I've set some of these models to work synthesising summaries and the like from my own blog posts, and LLMs are surprisingly middling at synthesising information from documents. I've seen even good models elide significant content, go on distracted rambles about other topics in the same area, and even invert the meaning of points being made.
Use them the way you'd have used Wikipedia in 2008: a starting point from which you can do actual research, but you have to watch out for a lot of unverified junk as well.