Reason for ignoring the normalizing constant

When approximating the posterior distribution, one typically ignores the normalizing constant in Bayes' theorem:
\[
\begin{aligned}
\pi(\theta|x) &= \frac{\pi(x|\theta)\pi(\theta)}{\int \pi(x|\theta)\pi(\theta)\,d\theta} \\
&= \frac{\pi(x|\theta)\pi(\theta)}{\pi(x)} \\
&\propto \pi(x|\theta)\pi(\theta)
\end{aligned}
\]
Not every MCMC method avoids the need for the normalizing constant, but many do (such as the Metropolis–Hastings algorithm), because each iteration depends on the posterior only through the ratio
\[
\begin{aligned}
R(\theta_1,\theta_2) &= \frac{\pi(\theta_1|x)}{\pi(\theta_2|x)} \\
&= \frac{\pi(x|\theta_1)\pi(\theta_1)}{\pi(x)} \Big/ \frac{\pi(x|\theta_2)\pi(\theta_2)}{\pi(x)} \\
&= \frac{\pi(x|\theta_1)\pi(\theta_1)}{\pi(x|\theta_2)\pi(\theta_2)}
\end{aligned}
\]
Observe that \(R(\theta_1,\theta_2)\) does not involve the normalizing constant: \(\pi(x)\) cancels out, so it never needs to be computed when approximating the posterior distribution \(\pi(\theta|x)\). It therefore suffices to know that \(\pi(\theta|x)\) is proportional to \(\pi(x|\theta)\pi(\theta)\) and to work with this unnormalized product directly.
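To make this concrete, here is a minimal sketch of a random-walk Metropolis–Hastings sampler in Python. The model (a Normal likelihood with a standard Normal prior) and all function names are hypothetical choices for illustration; the point is that the acceptance step uses only the ratio \(R(\theta_1,\theta_2)\), so \(\pi(x)\) never appears.

```python
import numpy as np

def unnormalized_log_posterior(theta, x):
    # Hypothetical model: x_i ~ Normal(theta, 1) with theta ~ Normal(0, 1).
    # Only log pi(x|theta) + log pi(theta) is computed; pi(x) is never needed.
    log_likelihood = -0.5 * np.sum((x - theta) ** 2)
    log_prior = -0.5 * theta ** 2
    return log_likelihood + log_prior

def metropolis_hastings(x, n_samples=5000, step=0.5, theta0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        # log R(proposal, theta): the normalizing constant pi(x) cancels,
        # leaving only the unnormalized terms pi(x|theta) * pi(theta).
        log_ratio = (unnormalized_log_posterior(proposal, x)
                     - unnormalized_log_posterior(theta, x))
        if np.log(rng.uniform()) < log_ratio:
            theta = proposal  # accept; otherwise keep the current state
        samples[i] = theta
    return samples

x = np.random.default_rng(1).normal(loc=2.0, size=20)
samples = metropolis_hastings(x)
print(samples[1000:].mean())  # crude posterior-mean estimate after burn-in
```

Because the random-walk proposal is symmetric, the proposal densities cancel as well, and the acceptance probability reduces to \(\min(1, R)\); working on the log scale simply avoids numerical underflow.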
