I don't think regular expressions are the right approach for this, at least if I understand your question correctly. As I understand it, you want to check whether an anagram of each of the given words appears in the text. For that, you should just compute a "normalized" form of each word (e.g. its lower-cased letters in sorted order) and check whether all the normalized words are among the text's normalized words.
>>> text = "some text with the words sd#ay and phayp in it"
>>> words = "happy", "#days"
>>> norm = lambda s: ''.join(sorted(s.lower()))
>>> len(set(map(norm, text.split())) & set(map(norm, words))) == len(words)
True
This normalizes each word in the text and in the words list exactly once, which takes O(n log n) per word when sorting (and could be reduced to O(n) using character counts), followed by a single set lookup per normalized word, as opposed to searching the text for every permutation of every word.
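To illustrate the O(n) character-count alternative mentioned above, here is one possible sketch: a frozenset of `collections.Counter` items is hashable, so it can serve as the normalized key in a set instead of the sorted string.

```python
from collections import Counter

# O(n) normalization: count characters instead of sorting them.
# A frozenset of (char, count) pairs is hashable, so it can be a set member.
def norm(s):
    return frozenset(Counter(s.lower()).items())

text = "some text with the words sd#ay and phayp in it"
words = ("happy", "#days")

# Two words are anagrams iff their character counts are equal.
found = set(map(norm, text.split())) & set(map(norm, words))
print(len(found) == len(words))  # True
```

For short words the sorting version is usually just as fast in practice; the counting version only pays off asymptotically for very long strings.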
Of course, this assumes you want to match whole words, not parts of words or e.g. DNA subsequences. You can (and probably should) use a regular expression instead of plain split()
to break the text into words, though, e.g. to take punctuation into account.
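For example, a minimal sketch of regex-based tokenization (the `\w+` pattern is just one assumption; it drops punctuation but would also split words containing characters like `#`, so adjust it to your data):

```python
import re

text = "Stop, look: was it a car or a cat I saw?"

# \w+ matches runs of letters, digits, and underscores,
# so commas, colons, and question marks are discarded.
tokens = re.findall(r"\w+", text.lower())
print(tokens)
```

Feeding these tokens into the normalization step above then handles punctuation-adjacent words correctly.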