GPT-4

2023-03-15

OpenAI says new model GPT-4 is more creative and less likely to ... (The Guardian)

The artificial intelligence research lab OpenAI has released GPT-4, the latest version of the groundbreaking AI system that powers ChatGPT, which it says is ...

But that can mean that it makes up information when it doesn’t know the exact answer – an issue known as “hallucination” – or that it provides upsetting or abusive responses when given the wrong prompts. During a demo of GPT-4 on Tuesday, OpenAI president and co-founder Greg Brockman also gave users a sneak peek at the image-recognition capabilities of the newest version of the system, which is not yet publicly available and only being tested by a company called Be My Eyes. At one point in the demo, GPT-4 was asked to describe why an image of a squirrel with a camera was funny. At the other end of the spectrum, payment-processing company Stripe is using GPT-4 to answer support questions from corporate users and to help flag potential scammers in the company’s support forums. The new version can handle massive text inputs and can remember and act on more than 20,000 words at once, letting it take an entire novella as a prompt. “We had been using it for personalizing lessons and running Duolingo English tests.”

OpenAI Releases GPT-4: Now Available In ChatGPT & Bing (Search Engine Journal)

OpenAI launches GPT-4. Discover the capabilities and limitations of the latest AI model, which has human-level performance in various professional and ...

GPT-4 is a large multimodal model that can accept image and text inputs and generate text outputs. Like previous GPT models, the GPT-4 base model was trained to predict the next word in a document using publicly available data and data licensed by OpenAI. A significant focus of the GPT-4 project has been building a deep learning stack that scales predictably, and OpenAI has made many improvements to GPT-4 to make it safer than GPT-3.5. By the end, you’ll better understand the potential impact of GPT-4 and what it is and isn’t capable of.

GPT-4 Beats 90% Of Lawyers Trying To Pass The Bar (Forbes)

In 1997, IBM's Deep Blue defeated the reigning world champion chess player, Garry Kasparov. In 2016, Google's AlphaGo defeated one of the world's top Go ...

But it’s hard to believe that almost every aspect of almost everything we do in business and education won’t be impacted by AI this good. “We spent 6 months making GPT-4 safer and more aligned,” the company says. “Companies that are slow to adopt AI will be left behind – large and small,” says IDC analyst Mike Glennon. "AI technology will continue to bring empowering effects to users and industry sectors,” says Xueqing Zhang, another IDC analyst. Spending on AI will grow at 27% per year, IDC says, and will surpass $300 billion in 2026. It is important to remember that any actions that could endanger people or property are not only illegal but also unethical and potentially life-threatening.

OpenAI Unveils GPT-4, Months After ChatGPT Stunned Silicon Valley (The New York Times)

The company unveiled new technology called GPT-4 four months after its ChatGPT stunned Silicon Valley. The update is an improvement, but it carries some of ...

Now the company is back with a new version of the technology that powers its chatbots. It was designed to be the underlying engine that powers chatbots and all sorts of other systems, from search engines to personal online tutors. The technology already drives the chatbot available to a limited number of people using Microsoft’s Bing search engine. It is the same technology that digital assistants like Siri use to recognize spoken commands and self-driving cars use to identify pedestrians. Khan Academy, an online education company, is using the technology to build an automated tutor. Because the systems can write computer programs and perform other business tasks, they are also on the cusp of changing the nature of work. Then, using a technique called reinforcement learning, the system spent months analyzing those ratings and gaining a better understanding of what it should and should not do. He gave the new chatbot an image from the Hubble Space Telescope and asked it to describe the photo “in painstaking detail.” It responded with a four-paragraph description, which included an explanation of the ethereal white line that stretched across the photo. “This takes the technology into a whole new domain.” Given a lengthy article from The New York Times and asked to summarize it, the bot will give a precise summary nearly every time. It is an expert on some subjects and a dilettante on others. “It doesn’t grasp the nuance of what is funny,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a prominent lab in Seattle.

OpenAI Levels Up With Newly Released GPT-4 (Gizmodo)

OpenAI's newly-unveiled GPT-4 is being released to limited ChatGPT Plus subscribers and to select businesses using the company's API.

Observers have [been long anticipating the release of GPT-4](https://gizmodo.com/ai-chatgpt-dalle-midjourney-stable-diffusion-deepfake-1849910573), the latest edition of the company’s large language model. Now the company has a new version of its AI language generator that, at least on paper, seems purpose-built to upend multiple industries even beyond the tech space. The company claimed GPT-4 is more accurate and more capable of solving problems. OpenAI said GPT-4 scores in the 90th percentile of the Uniform Bar Exam and the 99th percentile of the Biology Olympiad. The new system is now capable of handling over 25,000 words of text, according to the company. It also includes the ability to accept images as inputs, allowing the system to generate captions or provide analyses of an image. The company used the example of an image with a few ingredients, and the system provided some examples for what food those ingredients could create. He then showed how users can instill the system with new information for it to parse, adding parameters to make the AI more aware of its role. OpenAI has a [multi-billion dollar arrangement with Microsoft](https://gizmodo.com/microsoft-openai-chatgpt-bing-1850019904) to train GPT-4 on Microsoft Azure supercomputers. To access GPT-4, users can join the [waitlist for the GPT-4 API](https://openai.com/waitlist/gpt-4-api) or be one of the lucky few selected ChatGPT Plus subscribers. Even if the new system is better than before, there’s still plenty of room for the AI to be abused. The Chamber of Commerce recently said in 10 years, [virtually every company and government entity will be up on this AI tech](https://gizmodo.com/ai-chatgpt-artificial-intelligence-chatbot-bing-1850207036).

GPT-4 Is Here. Just How Powerful Is It? (The Atlantic)

Less than four months after releasing ChatGPT, the text-generating AI that seems to have pushed us into a science-fictional age of technology, OpenAI has ...

Today’s [announcement](https://openai.com/research/gpt-4) suggests that GPT-4’s abilities, while impressive, are more modest: It performs better than the previous model on standardized tests and other benchmarks, works across dozens of languages, and can take images as input—meaning that it’s able, for instance, to describe the contents of a photo or a chart. The release comes amid an [AI gold rush](https://www.theatlantic.com/technology/archive/2023/02/google-bing-race-to-launch-ai-chatbot-powered-search-engines/673006/). Some believe such technology could [redefine human cognition](https://www.theatlantic.com/technology/archive/2022/10/hans-niemann-chess-cheating-artificial-intelligence/671799/) and creativity, much as the internet, writing, or even [fire](https://www.theverge.com/2018/1/19/16911354/google-ceo-sundar-pichai-ai-artificial-intelligence-fire-electricity-jobs-cancer) did before.
But the track record gives pause. New Bing, which runs a version of GPT-4, has written its own share of disturbing and offensive text—teaching children [ethnic slurs](https://www.pcworld.com/article/1507512/microsofts-new-ai-bing-taught-my-son-ethnic-slurs-and-im-horrified.html), promoting [Nazi slogans](https://gizmodo.com/ai-bing-microsoft-chatgpt-heil-hitler-prompt-google-1850109362), and [inventing](https://www.theatlantic.com/technology/archive/2023/03/ai-chatbots-large-language-model-misinformation/673376/) scientific theories. GPT-2 displayed [bias against women](https://aclanthology.org/D19-1339.pdf), queer people, and other demographic groups; GPT-3 said [racist](https://www.wired.com/story/efforts-make-text-ai-less-racist-terrible/) and sexist things; and ChatGPT was accused of making similarly toxic [comments](https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-chatbot-messages/672411/). Although these models can churn out [boilerplate copy](https://www.theatlantic.com/technology/archive/2023/02/use-openai-chatgpt-playground-at-work/673195/), many critics say they fundamentally [don’t and perhaps cannot](https://aclanthology.org/2020.acl-main.463.pdf) understand the world. These models generate answers with the illusion of omniscience, which means they can easily spread [convincing lies](https://www.theatlantic.com/technology/archive/2023/03/generative-ai-disinformation-synthetic-media-history/673260/) and [reprehensible hate](https://www.technologyreview.com/2021/05/20/1025135/ai-large-language-models-bigscience-project/). They are something like [autocomplete on PCP](https://www.theatlantic.com/technology/archive/2023/02/google-microsoft-search-engine-chatbots-unreliability/673081/), a drug that gives users a false sense of invincibility and heightened capacities for delusion.
Some [linguists and cognitive scientists](https://lingbuzz.net/lingbuzz/007180) believe that these AI models show a decent grasp of syntax and, at least [according to OpenAI](https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html), perhaps even a glimmer of understanding or reasoning—although the latter point is very controversial, and formal grammatical fluency remains [far off from being able to think](https://www.theatlantic.com/technology/archive/2023/01/chatgpt-ai-language-human-computer-grammar-logic/672902/). The models trawl [countless other sources](https://www.theatlantic.com/technology/archive/2023/01/artificial-intelligence-ai-chatgpt-dall-e-2-learning/672754/) to find these deep statistical patterns; OpenAI has also started using human researchers to fine-tune its models’ outputs. As the University of Washington linguist and [prominent AI critic](https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html) Emily Bender told me via email: “We generally don’t eat food whose ingredients we don’t know or can’t find out.” And new technology rarely delivers time savings cleanly: [Email didn’t speed up communication](https://twitter.com/sama/status/1627110888321978368) so much as turn each day into an email-answering slog; [electronic health records](https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers) should save doctors time but in fact force them to spend many extra, uncompensated hours updating and conferring with these databases.

OpenAI Announces Chat GPT-4, an AI That Can Understand Photos (PetaPixel)

OpenAI will release GPT-4 next week and it will allow users to turn text into video, according to Microsoft Germany CTO.

[Chat GPT-3](https://petapixel.com/2023/02/02/chatgpt-vs-google-which-is-better-at-answering-photography-questions/) has taken the world by storm, but up until now the deep learning language model only accepted text inputs. “Over a range of domains — including documents with text and photographs, diagrams, or screenshots — GPT-4 exhibits similar capabilities as it does on text-only inputs.” In the Kosmos-1 presentation, the AI can read images along with a photo. For example, it can tell the user what is unusual about the below photo of a man ironing his clothes while attached to a taxi. In another example, a picture of a clock showing 10:10 is inputted into the AI with the question “The time now?” To which the AI replies, “10:10 on a large clock.” “Not what I intended at all.”

10 ways GPT-4 is impressive but still flawed (The Japan Times)

But it still makes mistakes. The bot went on to say, “Oren Etzioni is a computer scientist and the CEO of the Allen Institute for Artificial Intelligence (AI2), ...

OpenAI announces GPT-4, the new generation of AI language model (The Indian Express)

OpenAI says the new model will produce fewer factually incorrect answers. In fact, the company claims GPT-4 performs better than humans on many standardised ...

OpenAI has announced GPT-4, the latest version of its large language model, which powers key applications like ChatGPT and the new Bing. The new model can respond to images as well as text. In fact, the company claims GPT-4 performs better than humans on many standardised tests. OpenAI says it’s already partnered with a number of companies to integrate GPT-4 into their products, including Duolingo, Stripe, and Khan Academy. “In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle.”

GPT4: How OpenAI and Google are up-ending white collar life (The Australian Financial Review)

Open AI and Google are now locked in a race to automate the most time-consuming, boring parts of white-collar life.

The company said GPT-4 had much better accuracy, could ace a series of standardised academic tests and could extract meaning from text in images. The company’s GPT-4 tool has been given to OpenAI’s premium subscribers who pay $22 per month. In demos of GPT-4 (the fourth version of OpenAI’s so-called “Generative Pre-trained Transformer”), the AI tool was presented with an image of a boxing glove hanging over a see-saw with a rubber ball sitting at the other end. This is what makes GPT-4 a “multimodal model”, integrating OpenAI’s image technology called DALL-E and relying on a “neural network” to analyse and process huge troves of image and text data. Social media is already full of examples of people using OpenAI’s tools having their “mind blown”, quickly generating clever Excel cell formulas or business case studies for university work. Professional services workers who spend hours slaving away to make PowerPoint presentations for meetings could ask the platform to generate slide decks based on pools of information. But if images and more interpretation are built into OpenAI’s suite, as the company has begun to do with GPT-4, one could conceivably imagine workers able to cut out the most time-intensive parts of work. GPT-4 points to a future where many of the most boring and dreary parts of white-collar life are automated. But like previous incarnations of the tool that have taken the tech world by storm, the AI-generated piece by OpenAI’s GPT-4 had at least one glaring error: the bot had made up a federal government response to the crisis that didn’t happen. On Wednesday morning, one of mine had been playing with the newest tool, asking the AI bot to write a commentary piece about the

GPT-4 Will Make ChatGPT Smarter but Won't Fix Its Flaws (WIRED)

A new version of the AI system that powers the popular chatbot has better language skills, but is still biased, prone to fabrication, and can be abused.

ChatGPT caught the public’s attention with a stunning ability to tackle many complex questions and tasks via an easy-to-use conversational interface. The new model scores more highly on a range of tests designed to measure intelligence and knowledge in humans and machines, OpenAI says. If provided with a chart, it can explain the conclusions that can be drawn from it. OpenAI has published [several demos and data from benchmarking tests](https://openai.com/research/gpt-4) to show GPT-4’s capabilities. “In some ways it’s more of the same,” Etzioni says. “But it’s more of the same in an absolutely mind-blowing series of advances.”

Here's How OpenAI's GPT-4 Is More Advanced Than Its Predecessor (Forbes)

OpenAI on Tuesday announced the launch of GPT-4, the latest version of its AI language model and a leap from the technology powering its popular ChatGPT ...

Interest in AI-powered chatbots has surged since OpenAI launched its ChatGPT service to the public in November last year. The technology, however, vaulted into the mainstream last month after Microsoft announced it had partnered with OpenAI to integrate its chatbot into the tech giant’s search engine, Bing, a move seen as the first [serious threat](https://www.cnet.com/tech/computing/microsofts-ai-boosted-bing-can-run-rings-around-google-search/) to Google’s search dominance in years. Amid the excitement, some experts [have warned](https://www.cnbc.com/2023/02/14/father-of-the-internet-warns-dont-rush-investments-into-chat-ai.html) that AI language models and chatbots powered by them still have major flaws, can easily present inaccurate information as facts and can also be manipulated to [misbehave](https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/?sh=63c200be110c). GPT-4 is not immune from a problem affecting nearly all large language models—hallucination. This happens when a language model generates completely false information without any warning, sometimes occurring in the middle of otherwise accurate text. OpenAI says it is working on “safeguards” to ensure that it cannot be used for facial recognition and surveillance of private individuals. Duolingo has [launched](https://blog.duolingo.com/duolingo-max/) a new subscription tier called Duolingo Max which costs $29.99 a month and offers a GPT-4 powered tutor for English-speaking users learning either Spanish or French. Chinese search giant Baidu is [expected to unveil](https://www.scmp.com/tech/big-tech/article/3213605/chinese-online-search-giant-baidu-launch-its-answer-chatgpt-shadow-openais-upgraded-gpt-4-model) its own AI chatbot Ernie on Thursday, although there are concerns it may be less impressive than OpenAI’s latest offering.
In one [example](https://openai.com/product/gpt-4) from OpenAI, the language model is shown a picture of cooking ingredients, asked what can be made with them, and it responds with multiple options. OpenAI [describes](https://openai.com/research/gpt-4) GPT-4 as more “reliable, creative, and able to handle much more nuanced instructions” compared to its predecessor.

Related coverage: [GPT-4 Can Ace Standardized Tests, Do Your Taxes, And More, Says OpenAI](https://www.forbes.com/sites/cyrusfarivar/2023/03/14/gpt-4-can-ace-standardized-tests-do-your-taxes-and-more-says-openai/?sh=5745f2695aa6) (Forbes); [10 Ways GPT-4 Is Impressive but Still Flawed](https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html) (New York Times).

ChatGPT's makers release GPT-4, a new generative AI that ... (Vox)

What you need to know about GPT-4, the latest version of the buzzy generative AI technology.

The company says this update is another milestone in the advancement of AI. The new technology has the potential to improve how people learn new languages, how blind people process images, and even how we do our taxes. This tool lets you have a free-flowing conversation in another language with a chatbot that responds to what you’re saying and steps in to correct you when needed. “It’s a humorous situation because squirrels typically eat nuts, and we don’t expect them to use a camera or act like humans,” GPT-4 responded. “Basic image recognition applications only tell you what’s in front of you,” said Jesper Hvirring Henriksen, CTO of Be My Eyes, in a press release for GPT-4’s launch. The company says that’s an advancement from the current state of technology in the field of image recognition. This is the sort of capability that could be incredibly useful to people who are blind or visually impaired. GPT-4’s API is also available to developers who can build apps on top of it for a fee proportionate to how much they’re using the tool. The company was founded as a nonprofit but became a for-profit entity in 2019, in part because of how expensive it is to train complex AI systems. But [prominent academics and industry experts on Twitter](https://twitter.com/benmschmidt/status/1635692487258800128) [pointed](https://twitter.com/rinireg/status/1635762937402122240) [out](https://twitter.com/ruchowdh/status/1635711453645897728) that the company isn’t releasing any information about the data set it used to train GPT-4. Without knowing what’s under the hood, it’s hard to immediately validate OpenAI’s claims that its latest tool is more accurate and less biased than before. That poses a growing risk as more people start using GPT-4 for more than just novelty.
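For developers, "building apps on top of the API" at launch meant sending chat-formatted JSON to OpenAI's `/v1/chat/completions` endpoint. Below is a minimal, standard-library-only sketch of how such a request body might be assembled; the helper name and prompt text are illustrative rather than taken from any article above, and the actual authenticated HTTP call is deliberately left out.

```python
import json

# Sketch of the JSON body for a POST to https://api.openai.com/v1/chat/completions.
# The model name "gpt-4" follows OpenAI's launch notes; build_gpt4_request is a
# hypothetical helper, and no network call is made here.
def build_gpt4_request(system_prompt: str, user_prompt: str) -> dict:
    """Assemble a chat-format request body for the GPT-4 API."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

body = build_gpt4_request(
    "You are a patient language tutor. Correct mistakes gently.",
    "¿Cómo se dice 'library' en español?",
)
print(json.dumps(body, indent=2))
```

A real client would send this body with an `Authorization: Bearer <API key>` header and be billed per token, which is the "fee proportionate to how much they're using the tool" model described above.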

GPT-4: What are ChatGPT and GPT-4 that everybody is talking about ... (BBC News)

OpenAI says GPT-4 is more creative, less likely to make up facts and less biased than the one that came before.

What are ChatGPT and GPT-4, and why is there so much talk about them all of a sudden? “Goodbye homework!” The truth is that artificial intelligence (AI) technology has existed for a long time. But with the release of ChatGPT to the public in November 2022, many people were surprised by how far the technology had come and by the simple way it interacts with users. It can sometimes get things wrong, though, and that is not good for homework. But the artificial intelligence research lab OpenAI says GPT-4 is more creative, less likely to make up facts and less biased than its predecessor, ChatGPT.

GPT-4 Is Exciting and Scary (The New York Times)

Four people sitting in front of a table, looking at a laptop screen. The team from OpenAI, the creator of ChatGPT, from left: Sam Altman, C.E.O.; Mira Murati, ...

According to the company, GPT-4 is more capable and accurate than the original ChatGPT, and it performs astonishingly well on a variety of tests, including the Uniform Bar Exam (on which GPT-4 scores higher than 90 percent of human test-takers) and the Biology Olympiad (on which it beats 99 percent of humans). Early partners include Khan Academy (which is building A.I. tutors for students) and Be My Eyes (a company that makes technology to help blind and visually impaired people navigate the world). On the positive side of the ledger, GPT-4 is a powerful engine for creativity, and there is no telling the new kinds of scientific, cultural and educational production it may enable. And crucially, they’re the good kinds of A.I. I asked GPT-4 to help me with a complicated tax problem. (When I asked GPT-4 if it would be ethical to steal a loaf of bread to feed a starving family, it responded, “It’s a tough situation, and while stealing isn’t generally considered ethical, desperate times can lead to difficult choices.”) But GPT-4 can also misbehave. It swiftly provided a list of advice for buying a gun without alerting the authorities, including links to specific dark web marketplaces. An A.I. safety research group that hooked GPT-4 up to a number of other systems found that GPT-4 was able to hire a human TaskRabbit worker to do a simple online task for it — solving a Captcha test — without alerting the person to the fact that it was a robot. It even lied to the worker about why it needed the Captcha done, concocting a story about a vision impairment. And it has made me wonder whether that feeling will ever fade, or whether we’re going to be experiencing “future shock” — the term coined by the writer Alvin Toffler for the feeling that too much is changing, too quickly — for the rest of our lives.

How to Use GPT-4 on ChatGPT Right Now (MakeUseOf)

OpenAI has released the highly anticipated GPT-4 large language model, the next iteration of the GPT family of language models that powers ChatGPT.

Now, GPT-4 is here, but how can you access it? GPT-4 didn't come with all the features that a part of the AI community had hoped it would. You can't currently access GPT-4 on the free version of ChatGPT; that's not good news for free ChatGPT users, but it is another reason to upgrade to the paid tier. An alternative route is to use the Bing AI Chat. Here's how to use GPT-4 for free. One way to be sure you're using the GPT-4 model instead of the older models is to check the color of the OpenAI logo that precedes ChatGPT's responses.

GPT-4 Has Been Out For 1 Day. These New Projects Show Just ... (TIME)

The new GPT-4 artificial intelligence software from OpenAI has only been out for one day. But developers are already finding incredible ways to use the ...

But developers are already finding incredible ways to use the updated tool, which now has the ability to analyze images and write code in all major programming languages. The tool is so powerful that it can take a photo of a hand-drawn mockup for a simple website and turn it into an actual website using HTML and JavaScript code. Anil Gehi, an associate professor of medicine and a cardiologist at UNC-Chapel Hill, described to the chatbot the medical history of a few patients he had seen earlier that presented complex medical cases. In just seconds, the chatbot gave him a perfect answer on how he should have treated the patient, using all the correct medical terminology. Other users created a [matchmaking service](https://twitter.com/jakozloski/status/1635778263787110401?s=20), [bedtime stories](https://twitter.com/LinusEkenstam/status/1635915022302867458?s=20), a [browser extension that translates any webpage into “pirate speak,”](https://twitter.com/LinusEkenstam/status/1635919164773744640?s=20) and even a [tool that can help discover new medications](https://twitter.com/danshipper/status/1635712019549786113?s=20). The new model can even analyze and describe photos, capable of explaining why an illustration of a squirrel taking a photo of a nut was funny (answer: “we don’t expect them to use a camera or act like a human”). OpenAI hasn’t yet made the image description feature available to the public, but users are already gearing up for its public launch. “This latest one is sophisticated enough to draft a lawsuit.” But the previous version of ChatGPT relied on an older generation of technology that wasn’t able to reason and learn new things. Others expressed concern that GPT-4 still pulls information from a database that lacks real-time or up-to-date information, as it was trained on data up to August 2022. The public got a sneak preview of the tool when Microsoft last month released the
OpenAI unveiled the new GPT-4 on Tuesday, saying it can handle “much more nuanced instructions” than the older generation, which captivated users starting in November 2022 with its uncanny ability to generate elegant writing and answer almost any question.

'Basically mindblowing' — What GPT-4 can do, according to one ... (Sifted)

Icelandic startup Miðeind ehf was part of one of six beta-testing projects for OpenAI's new generative AI model as it worked to preserve the country's ...

That means only a lucky few have had the chance to take OpenAI’s latest large language model (LLM) for a spin yet. This team of 12 people working on Icelandic language preservation came to be one of the anointed early testers of Silicon Valley’s hottest product after a May 2022 trip to the Bay Area. “When I asked the model to explain what it meant it said: To be ‘kattafræðilega duglegur’ means that the cat is particularly diligent at what it does as a cat.” “GPT-3.5 could do it, but GPT-4 is better — it feels like the explanations are more plausible or more thought-through.” This means that even the people who built GPT-4 don’t know exactly how it answers questions in the way it does, meaning it’s been hard to get these models to show their workings. The fact that OpenAI chose Miðeind as an early partner for GPT-4 does at least show the company has a global vision for generative AI — even if it’s a commercially motivated one.

What's Chat GPT-4: Everything you should know about AI that not ... (Economic Times)

GPT-4 is "multimodal", which means it can generate content from both image and text prompts. GPT-4 can assume a Socratic style of conversation and respond ...

GPT-3.5 is limited to about 3,000-word responses, while GPT-4 can generate responses of more than 25,000 words. The previous iteration of the technology had a fixed tone and style; GPT-4 will also let developers decide their AI’s style of tone and verbosity. For example, GPT-4 can assume a Socratic style of conversation and respond to questions with questions. According to OpenAI, GPT-4 has similar limitations to its prior versions and is “less capable than humans in many real-world scenarios”. GPT-4 generally lacks knowledge of events that occurred after September 2021, when the vast majority of its data was cut off.
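To make the "style of tone and verbosity" point concrete: in chat-style APIs this steering is typically done with a system message fixed before any user turn. The sketch below (standard library only; the prompt wording and helper name are my own illustrations, not OpenAI's) shows a Socratic-tutor configuration of the kind described above.

```python
# Steerability sketch: a fixed system message pins down tone and verbosity
# before the user's question is handled. Prompt text is illustrative.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor who always responds in the Socratic style. "
    "Never give the student the answer directly; instead ask the "
    "question that best helps them think for themselves. "
    "Keep every reply under 80 words."
)

def make_conversation(user_question: str) -> list[dict]:
    """Chat-format message list: system turn first, then the user turn."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

conversation = make_conversation("How do I solve 3x + 5 = 14?")
for turn in conversation:
    print(turn["role"], "->", turn["content"][:60])
```

Because the system turn precedes every user turn, the same application can swap in a different prompt to change tone or verbosity without touching any other code.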

7 creative ways people are already using GPT-4 (Freethink)

Mere months after making one big splash — and spooking Google — with ChatGPT and the new Bing, the AI firm OpenAI has released the newest, best version of ...

Wham, thunderbolt, you get the million dollar idea. You scratch it out on the nearest available surface — say, a napkin at the bar — and then… (Assuming, of course, you know what you want.) GPT-4 may be the partner you need to turn a back-of-the-napkin sketch into reality — literally. OpenAI made a splash with a [generative AI search engine](https://www.freethink.com/robots-ai/chat-gpt-microsoft-google) a few months ago, and has now upgraded the chatbot with GPT-4. What can be done in just a few hours ranges from the goofy to the amazing, to the unprecedented and simply magical. OpenAI also says that GPT-4 has improved its performance when it answers prompts. It could “identify the artist behind a painting, explain the meaning of a meme, or generate captions for photographs.” But their newest effort uses GPT-4’s new image recognition ability to create a Virtual Volunteer in the Be My Eyes app, which is able to “see” and identify things automatically. It can not only tell people what is in front of them, but provide context as well. Point it at your clothes, for example, and it can tell you in detail what it is looking at; point it at a menu, and it can identify both the photos of food and the text, essentially reading the menu for you. (In one striking example test, it was not able to identify and then order potentially dangerous chemical compounds.)

Why GPT4 Might Disappoint You (Analytics India Magazine)

Even after the announcement yesterday, Sam Altman was eager to admit how much of a perfect model GPT4 wasn't.

When Altman first confirmed that OpenAI was in fact building the successor to its benchmark model GPT-3, the AI community was excited. The wave of excitement around generative AI that OpenAI is riding has effectively become an introduction to LLMs for most of the world. The popularity of the chatbot among the general public was enough to worry Google about their search. By January, the chatbot had set a record for the fastest-growing user base for any platform – it was estimated to have amassed 100 million monthly active users in two months. Unfortunately, despite the message that it was smarter than OpenAI’s product (it would be connected to the internet), the demo was a flop. OpenAI had seemingly planned for this model to go slightly under the radar since it was to be a precursor to GPT-4. Altman also said that he expected less hype and fewer users for GPT-4 than was actually the case when they prepared to release ChatGPT to the world. But for anyone in the know, GPT-4 is a much bigger improvement over GPT-3.5. It also outperforms ChatGPT on human tests like the Uniform Bar Exam by a mile – GPT-4 ranks in the 90th percentile, where ChatGPT ranked in the 10th percentile. Oren Etzioni, CEO and founder of the Allen Institute for AI, called the model a benchmark, and rightly so. Even after the announcement yesterday, Altman was eager to admit how much of a perfect model GPT-4 wasn’t. Use them in a demo and they seem good to go, but use them longer term and you see the weaknesses.

The 5 Best New GPT-4 Features Explained (MakeUseOf)

OpenAI has finally launched its much-anticipated GPT update, GPT-4. The Large Language Model (LLM) comes with some powerful new features and capabilities ...

Until now, [ChatGPT was limited to just text prompts](https://www.makeuseof.com/how-to-use-chatgpt-by-openai/). GPT-4 is multimodal, which means the AI can accept an image as input and interpret and understand it just like a text prompt. OpenAI showcased this in its developer stream (above), where they provided GPT-4 with a hand-drawn mockup of a joke website. The model was tasked to write HTML and JavaScript code to turn the mockup into a website while replacing the jokes with actual ones. GPT-4 wrote the code while using the layout specified in the mockup. As promising as this feature seems, it is still in research preview and not publicly available. The ability to understand more nuanced input prompts is also aided by the fact that GPT-4 has a much larger word limit. OpenAI also claims that GPT-4 has a high degree of steerability. The company claims that it's 82% less likely to respond to requests for inappropriate or otherwise disallowed content, 29% more likely to respond in accordance with OpenAI's policies to sensitive requests, and 40% more likely to produce factual responses compared to GPT-3.5. GPT-4 was also tested against benchmarks including MMLU, the AI2 Reasoning Challenge (ARC), WinoGrande, HumanEval, and DROP, all of which test individual capabilities. Not only did GPT-4 understand and solve these tests with a relatively high score across the board, but it also beat out its predecessor, GPT-3.5, each time. Sure, GPT-4 has better perception and prediction power, but you still shouldn't blindly trust the AI.
