The ability of large language models to perform impressive tasks comes from both their intellect and their knowledge. The two overlap in many places, and it's often not clear-cut where one ends and the other begins. Intellect can be defined as a model's ability to reason; knowledge is the body of information it can draw on and retrieve for you.
Large language models are typically used standalone (essentially as a replacement for Google) because of their knowledge. But you can also attach one to external knowledge bases, turning it into a retrieval-augmented generation (RAG) system.
Standalone LLMs have been trained on vast amounts of data from the Internet, and much of the time we use them more for their knowledge than for their intellect.
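To make the RAG idea concrete, here is a minimal sketch of the loop. Everything in it is an illustrative assumption: the toy document list, the naive keyword-overlap retriever, and the `call_llm` stub standing in for a real chat-completion API. Production systems typically retrieve with embedding-based vector search instead.

```python
import re

# Hypothetical mini knowledge base; real systems index far larger corpora.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is 330 metres tall.",
    "Python 3.12 was released in October 2023.",
    "Mount Everest is the highest mountain above sea level.",
]

def tokens(text: str) -> set[str]:
    """Lowercase the text and split it into alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API call."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    """The core RAG loop: retrieve relevant context, then let the model reason over it."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("How tall is the Eiffel Tower?"))
```

The key design point is the division of labor: the retriever supplies the knowledge, and the model only has to supply the intellect, reasoning over whatever context it is handed.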
Interesting studies have recently shown that some leading open models in the 7B range, such as Mistral and Llama, perform markedly better when paired with RAG.
This suggests that the intellect in these models is already strong, while the knowledge component can largely be replaced by an external knowledge base used for retrieval. That way, the data the model works with is always relevant and up-to-date.
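One rough way to see the "up-to-date" point, continuing with the assumed `KNOWLEDGE_BASE` and `retrieve` from the sketch above: refreshing the model's knowledge is just a data operation on the index, with no retraining of the model's weights involved.

```python
# Adding new facts is an index update, not a training run.
KNOWLEDGE_BASE.append("Python 3.13 was released in October 2024.")

# The freshly added document now wins retrieval for questions about it.
print(retrieve("When was Python 3.13 released?", KNOWLEDGE_BASE, k=1))
```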