When using IHW, one has to provide an alpha value to the ihw function. Is this alpha used in some manner in the calculation of the weighted p-values? Basically, if I decide at a later stage to use a different alpha cutoff than the one I used to run IHW, do I need to re-run IHW, or can I just apply the new cutoff directly to the adj_pvalue provided by IHW?
That is an excellent question!
The weights are indeed calibrated towards a specific alpha value. So if you run IHW with different alphas, you will in general get different weights. Furthermore, if you run IHW at a specific alpha but then use another cutoff alpha' on the adjusted p-values, this will often lead to fewer rejections. [1] provides some further insight into why the optimal weights depend on the chosen alpha.
The above having been said, if you reject adjusted p-values below some cutoff alpha' that is different from the original alpha you specified, then FDR (false discovery rate) control still holds. So it is actually okay to do so! In fact, if you want to choose the alpha cutoff in a more exploratory way, then I would recommend doing it this way, i.e. starting with a reasonable alpha and then thresholding by adjusted p-values. While you might lose some power, the advantage is that the lists of rejections will be nested, and you will not need to keep repeating the computation.
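For concreteness, that workflow might look like the following R sketch. This is illustrative, not a complete analysis: `pvalues` and `covariate` are placeholders for your own vector of p-values and informative covariate.

```r
library("IHW")

# Run IHW once at a reasonable initial alpha (here 0.1).
# `pvalues` and `covariate` are placeholders for your own data.
ihw_res <- ihw(pvalues ~ covariate, alpha = 0.1)

# Later, threshold the adjusted p-values at a different,
# possibly stricter cutoff (e.g. 0.05) without re-running IHW.
# FDR control still holds, and the rejection lists are nested.
rejected <- adj_pvalues(ihw_res) <= 0.05
```

Note that re-running `ihw` with `alpha = 0.05` could give somewhat more rejections at that level (the weights would be re-calibrated), at the cost of non-nested lists and repeated computation.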
As a side-remark, in statistics papers the proofs usually assume that you choose an alpha before doing the analysis and then stick to it. It is not always entirely clear whether it is OK to change the alpha in a post-hoc way after looking at the adjusted p-values. For FDR procedures (the q-value and BH procedures specifically), Storey, Taylor and Siegmund [2] gave a justification that it is (more or less) fine to choose your alpha later on. Katsevich and Ramdas recently revisited this issue [3], essentially reaching the same conclusion as above.
[1] Roquain, Etienne, and Mark A. van de Wiel. "Optimal weighting for false discovery rate control." Electronic Journal of Statistics 3 (2009): 678-711.
[2] Storey, John D., Jonathan E. Taylor, and David Siegmund. "Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66.1 (2004): 187-205.
[3] Katsevich, Eugene, and Aaditya Ramdas. "Towards 'simultaneous selective inference': post-hoc bounds on the false discovery proportion." arXiv preprint arXiv:1803.06790 (2018).