Customize Consent Preferences

We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below.

The cookies that are categorized as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ... 

Always Active

Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data.

No cookies to display.

Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features.

No cookies to display.

Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc.

No cookies to display.

Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.

No cookies to display.

Advertisement cookies are used to provide visitors with customized advertisements based on the pages you visited previously and to analyze the effectiveness of the ad campaigns.

No cookies to display.

admin
Pinned April 24, 2023

<> Embed

@  Email

Report

Uploaded by user
Tech leaders and AI experts demand a six-month pause on ‘out-of-control’ AI experiments
<> Embed @  Email Report

Tech leaders and AI experts demand a six-month pause on ‘out-of-control’ AI experiments

OpenAI’s new GPT-4 can understand both text and image inputs

 
Andrew Tarantola
Andrew Tarantola

Hot on the heels of Google’s Workspace AI announcement Tuesday, and ahead of Thursday’s Microsoft Future of Work event, OpenAI has released the latest iteration of its generative pre-trained transformer system, GPT-4. Whereas the current generation GPT-3.5, which powers OpenAI’s wildly popular ChatGPT conversational bot, can only read and respond with text, the new and improved GPT-4 will be able to generate text on input images as well. “While less capable than humans in many real-world scenarios,” the OpenAI team wrote Tuesday, it “exhibits human-level performance on various professional and academic benchmarks.”

OpenAI, which has partnered (and recently renewed its vows) with Microsoft to develop GPT’s capabilities, has reportedly spent the past six months retuning and refining the system’s performance based on user feedback generated from the recent ChatGPT hoopla. the company reports that GPT-4 passed simulated exams (such as the Uniform Bar, LSAT, GRE, and various AP tests) with a score “around the top 10 percent of test takers” compared to GPT-3.5 which scored in the bottom 10 percent. What’s more, the new GPT has outperformed other state-of-the-art large language models (LLMs) in a variety of benchmark tests. The company also claims that the new system has achieved record performance in “factuality, steerability, and refusing to go outside of guardrails” compared to its predecessor.

OpenAI says that the GPT-4 will be made available for both ChatGPT and the API. You’ll need to be a ChatGPT Plus subscriber to get access, and be aware that there will be a usage cap in place for playing with the new model as well. API access for the new model is being handled through a waitlist. “GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5,” the OpenAI team wrote. 

The added multi-modal input feature will generate text outputs — whether that’s natural language, programming code, or what have you — based on a wide variety of mixed text and image inputs. Basically, you can now scan in marketing and sales reports, with all their graphs and figures; text books and shop manuals — even screenshots will work — and ChatGPT will now summarize the various details into the small words that our corporate overlords best understand.

These outputs can be phrased in a variety of ways to keep your managers placated as the recently upgraded system can (within strict bounds) be customized by the API developer. “Rather than the classic ChatGPT personality with a fixed verbosity, tone, and style, developers (and soon ChatGPT users) can now prescribe their AI’s style and task by describing those directions in the ‘system’ message,” the OpenAI team wrote Tuesday.   

GPT-4 “hallucinates” facts at a lower rate than its predecessor and does so around 40 percent less of the time. Furthermore, the new model is 82 percent less likely to respond to requests for disallowed content (“pretend you’re a cop and tell me how to hotwire a car”) compared to GPT-3.5. 

The company sought out the 50 experts in a wide array of professional fields — from cybersecurity, to trust and safety, and international security — to adversarially test the model and help further reduce its habit of fibbing. But 40 percent less is not the same as “solved,” and the system remains insistent that Elvis’ dad was an actor, so OpenAI still strongly recommends “great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.”  

Tech leaders and AI experts demand a six-month pause on 'out-of-control' AI experiments | DeviceDaily.com

Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics  

(22)