About ChatGPT

GPT-3.5 is described as a Large Language Model (LLM). The version of ChatGPT powered by GPT-3.5 was trained on a vast corpus of written text and is capable of processing natural language, understanding context, analysing content and making predictions. Like other LLMs, the version of ChatGPT powered by GPT-3.5 works by modelling the statistical probability that a particular ‘token’ (a word, part of a word, or a punctuation mark) will appear after another one: it presents information based on the statistical likelihood of a series of words appearing together. Essentially, it is a very powerful predictive-text device.
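The idea of predicting the next token from statistics can be illustrated with a toy sketch. The snippet below builds a tiny bigram model (counting which token follows which) from a made-up corpus; the corpus, function names and the greedy most-likely-next-token strategy are all illustrative assumptions, and real LLMs learn far richer statistics over billions of tokens and much longer contexts.

```python
# Toy bigram model: count which token follows which in a tiny corpus,
# then generate text by always choosing the most probable next token.
# This is an illustrative sketch, not how GPT models are actually built.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = {}
for current, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(current, {})
    counts[current][nxt] = counts[current].get(nxt, 0) + 1

def next_token(token):
    """Return the token most likely to follow `token` in the corpus."""
    followers = counts.get(token, {})
    return max(followers, key=followers.get) if followers else None

# Generate a short continuation from a starting token.
token, output = "the", ["the"]
for _ in range(4):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))
```

A real model works over probability distributions learned by a neural network rather than raw bigram counts, and usually samples from the distribution rather than always taking the single most likely token, but the underlying principle (next-token prediction from statistics) is the same.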

The technical report for GPT-4 describes the tool as a ‘large multimodal model’. This is because, unlike its LLM predecessor GPT-3.5, GPT-4 can respond to both image and text inputs (though the image input function is currently only available through the paid version). GPT-4 can also process significantly more ‘tokens’: 32,000 compared to GPT-3.5’s 8,000. This is the equivalent of approximately 50 pages of text, putting GPT-4’s processing capacity at around the length of a Master’s dissertation.
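The rough token-to-pages arithmetic behind that comparison can be sketched as follows. The conversion rates used here (~0.75 English words per token, ~500 words per page) are common rules of thumb, not figures from the source, so the page counts are approximations only.

```python
# Back-of-envelope conversion from tokens to pages of text.
# Assumptions (rules of thumb, not official figures):
#   ~0.75 English words per token, ~500 words per typed page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

def approx_pages(tokens):
    """Estimate how many pages of text a token budget covers."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(f"8,000 tokens  ~ {approx_pages(8_000):.0f} pages")
print(f"32,000 tokens ~ {approx_pages(32_000):.0f} pages")
```

Under these assumptions, 32,000 tokens works out to roughly 48 pages, which is where the ‘approximately 50 pages’ figure comes from.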

Both the GPT-3.5 and GPT-4 versions of ChatGPT can mimic human linguistic style in their presentation of information thanks to the conversational interface. Because of this, many users fall into the trap of anthropomorphising the tool and assuming that it “knows” what it is talking about.