The LLM reads over the search results, takes them as a prompt, and then generates a summary of those results as its output.
The search results themselves come from the good old search engine; the "AI summary" option at the top is just doing the reading for you.
And of course, if the answer isn't trivial, it's very likely to produce an inaccurate or incorrect output from those inputs.
But none of that changes how the underlying search engine works. It's just doing additional work on the same results the same search engine generates.
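To make the pattern concrete, here's a minimal sketch of what "doing the reading for you" amounts to: the ordinary search results get pasted into a prompt, and the LLM generates from that. Everything here is hypothetical (the function name, the result fields, the prompt wording); the real pipeline is whatever the engine actually uses.

```python
def build_summary_prompt(query, results):
    """Concatenate ordinary search-engine results into an LLM prompt.

    Hypothetical sketch: `results` is assumed to be a list of dicts with
    'title' and 'snippet' keys, the same data the search engine already
    returns for the normal results page.
    """
    snippets = "\n".join(
        f"[{i}] {r['title']}: {r['snippet']}" for i, r in enumerate(results, 1)
    )
    return (
        f"Summarize the following search results for the query '{query}'. "
        f"Cite sources by number.\n\n{snippets}"
    )

results = [
    {"title": "Example A", "snippet": "First result text."},
    {"title": "Example B", "snippet": "Second result text."},
]
prompt = build_summary_prompt("example query", results)
# The summary would then come from feeding this prompt to a model,
# e.g. summary = llm.generate(prompt) with whatever model is in use.
```

The point is that the model only ever sees the text of the prompt; it has no special access to the index, so a bad or thin set of results gives it bad inputs to summarize.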
EDIT: Just to clarify, DDG also has a "chat" service that, as far as I can tell, is just a UI overlay over whatever model you select. That works the same way as any AI chatbot you can use online or host locally, and I presume it's not what we're talking about.