Here are some of the major differences:
1/ GPT-4 can ‘see’ images now
The most noticeable change in GPT-4 is that it's multimodal, meaning it can work with more than one modality of information. GPT-3 and ChatGPT's GPT-3.5 were limited to textual input and output: they could only read and write. GPT-4, by contrast, can be fed images and asked to describe, analyse, or answer questions about them.
If this reminds you of Google Lens, that's understandable. But Lens only searches for information related to an image, whereas GPT-4 goes further by interpreting an image and reasoning about its contents. An example provided by OpenAI showed the model explaining the joke in an image of an absurdly large, outdated connector plugged into a modern iPhone. The only catch is that image inputs are still a research preview and not yet publicly available.
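To make the idea concrete, here is a minimal Python sketch of what an image-plus-text request might look like. Because image inputs are still a research preview with no public documentation, everything here is an assumption: the model name, the message shape with an `image_url` part, and the example URL are illustrative guesses modeled on the existing text-only chat completions API.

```python
# Speculative sketch: image input is a research preview, so the model name
# and message format below are assumptions, not a documented public API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # hypothetical image-capable GPT-4 model
    messages=[
        {
            "role": "user",
            # A multimodal message mixes text parts and image parts.
            "content": [
                {"type": "text", "text": "What is funny about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/odd-connector.jpg"},
                },
            ],
        }
    ],
)

# The model's answer comes back as ordinary text, just like a text-only chat.
print(response.choices[0].message.content)
```

Apart from the image part in the message, the call is identical to a text-only GPT-4 request, which is the point: multimodality extends the same chat interface rather than replacing it.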
2/ GPT-4 is harder to trick