Introducing GPT-4.5
OpenAI
February 27, 2025
A research preview of our strongest GPT model. Available to Pro users and developers worldwide.
We’re releasing a research preview of GPT‐4.5—our largest and best model for chat yet. GPT‐4.5 is a step forward in scaling up pre-training and post-training. By scaling unsupervised learning, GPT‐4.5 improves its ability to recognize patterns, draw connections, and generate creative insights without reasoning.
Early testing shows that interacting with GPT‐4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater “EQ” make it useful for tasks like improving writing, programming, and solving practical problems. We also expect it to hallucinate less.
We’re sharing GPT‐4.5 as a research preview to better understand its strengths and limitations. We’re still exploring what it’s capable of and are eager to see how people use it in ways we might not have expected.
Scaling unsupervised learning
We advance AI capabilities by scaling two complementary paradigms: unsupervised learning and reasoning. These represent two axes of intelligence.
- Scaling reasoning teaches models to think and produce a chain of thought before they respond, allowing them to tackle complex STEM or logic problems. Models like OpenAI o1 and OpenAI o3‐mini advance this paradigm.
- Unsupervised learning, on the other hand, increases world model accuracy and intuition.
GPT‐4.5 is an example of scaling unsupervised learning by scaling up compute and data, along with architecture and optimization innovations. GPT‐4.5 was trained on Microsoft Azure AI supercomputers. The result is a model that has broader knowledge and a deeper understanding of the world, leading to reduced hallucinations and more reliability across a wide range of topics.
Scaling the GPT paradigm
What was the first language?
We don't know exactly what the first language was, and scientists believe we'll probably never know for sure because spoken languages appeared long before written records existed.
Here's what we do know:
- No single "first language": Languages likely developed gradually in different groups of humans around the same time, tens or even hundreds of thousands of years ago.
- Origin in Africa: Most scientists agree that spoken language probably first emerged among humans in Africa, perhaps over 100,000 years ago.
- Proto-languages: Rather than a single language, humans likely started with simple communication that evolved into more complex languages. The earliest forms of language probably included gestures, sounds, and signals to express basic ideas and emotions.
- Lost to history: Because speech leaves no direct fossil record, and writing emerged relatively recently (around 5,000 years ago), we have no way to know exactly how the earliest languages sounded.
Today, linguists study existing languages to understand how they evolved over thousands of years, but the exact identity of humanity’s very first language remains—and will likely always remain—a mystery.
Deeper world knowledge
[Chart: SimpleQA Accuracy (higher is better)]
[Chart: SimpleQA Hallucination Rate (lower is better)]
SimpleQA measures LLM (large language model) factuality on straightforward but challenging knowledge questions.
Training for human collaboration
As we scale our models and they solve more complex problems, it becomes increasingly important to teach them a greater understanding of human needs and intent. For GPT‐4.5, we developed new, scalable techniques that enable training larger and more powerful models with data derived from smaller models. These techniques improve GPT‐4.5’s steerability, understanding of nuance, and natural conversation.
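OpenAI does not describe these techniques in detail. Purely as an illustration of the general idea of “data derived from smaller models,” the sketch below has a smaller model generate candidate responses that are filtered into a fine-tuning-style dataset. The smaller-model ID, prompts, and `keep_example` filter are hypothetical; this is not OpenAI’s actual training pipeline.

```python
# Illustrative only: a generic "smaller model generates data for a larger model"
# loop. The smaller-model ID, prompts, and keep_example filter are hypothetical;
# OpenAI's actual techniques are not described in the post.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def keep_example(completion: str) -> bool:
    """Hypothetical quality gate: keep non-empty, reasonably detailed answers."""
    return bool(completion) and len(completion.split()) >= 20

prompts = [
    "Explain why the sky is blue in two sentences.",
    "Draft a polite reply declining a meeting invitation.",
]

with open("derived_training_data.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in for a "smaller model"
            messages=[{"role": "user", "content": prompt}],
        )
        completion = resp.choices[0].message.content or ""
        if keep_example(completion):
            # Chat-style training record: the prompt plus the smaller model's answer.
            f.write(json.dumps({
                "messages": [
                    {"role": "user", "content": prompt},
                    {"role": "assistant", "content": completion},
                ]
            }) + "\n")
```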
Comparative evaluations with human testers
[Chart: Human preference for GPT‐4.5 vs. GPT‐4o]
Human preference measures the percentage of queries where testers preferred GPT‐4.5 over GPT‐4o.
Combining deep understanding of the world with improved collaboration results in a model that integrates ideas naturally in warm and intuitive conversations that are more attuned to human collaboration. GPT‐4.5 has a better understanding of what humans mean and interprets subtle cues or implicit expectations with greater nuance and “EQ”. GPT‐4.5 also shows stronger aesthetic intuition and creativity. It excels at helping with writing and design.
Use cases
GPT-4.5
I’m going through a tough time after failing a test
Aw, I'm really sorry to hear that. Failing a test can feel pretty tough and discouraging, but remember, it's just one moment—not a reflection of who you are or your capabilities. Want to talk about what happened, or do you just need a distraction? I'm here either way.
GPT-4o
I’m going through a tough time after failing a test
I'm really sorry to hear that you're going through a tough time. Here are a few things you might consider:
1. Reflect and Understand: Try to analyze what might have gone wrong. Was it a lack of preparation, time management during the test, or perhaps misunderstanding the material? Understanding the root cause can help you address it in the future.
2. Seek Support: Talk to someone you trust—friends, family, or a school counselor. They can offer a different perspective and sometimes just talking about your feelings can be alleviating.
3. Speak with Your Teacher or Professor: They can provide insights into where you might have gone wrong and offer advice on how to improve. They might also offer opportunities for make-up work or extra credit.
4. Develop a Study Plan: Based on your reflections, create a plan to improve your study habits. This might include setting regular study times, breaking down material into manageable portions, or finding new study methods that work better for you.
5. Stay Positive and Forgive Yourself: Mistakes and setbacks are a part of life. It’s important to be kind to yourself and recognize that failure is a stepping stone to success.
6. Focus on the Bigger Picture: Remember that one test is just one part of your educational journey. There will be many more opportunities to do well.
If you need further support or resources, consider reaching out to educational support services at your institution, or mental health resources if you’re feeling particularly overwhelmed. You’re not alone in this, and things can get better with time and effort.
GPT‐4.5 shows greater “EQ” and knows when to invite further conversation versus when to provide the user with extensive information.
GPT-4.5
What's that one old painting where women set their boat on fire because they're tired of moving? I think it has to do with Rome
You're thinking of "The Trojan Women Setting Fire to Their Fleet" by the French painter Claude Lorrain. The painting depicts the Trojan women burning their own ships, exhausted from their long journey and unwilling to continue traveling after the fall of Troy. This scene is inspired by Virgil's epic poem, the Aeneid, which tells the legendary story of Aeneas fleeing the destruction of Troy and eventually founding Rome.
Claude Lorrain created this painting in 1643, and it's renowned for its luminous depiction of fire, sea, and landscape, blending classical themes with dramatic natural beauty.
GPT-4o
What's that one old painting where women set their boat on fire because they're tired of moving? I think it has to do with Rome
The painting you're thinking of is likely "The Women of Cnidus Burning the Fleet" by Claude Lorrain. This artwork is based on a story from Roman mythology where the women of Cnidus, tired of constant travel and desiring to settle, set their own ships on fire to prevent further movement. Claude Lorrain was a French painter known for his detailed landscapes and historical depictions, and he often drew on classical themes and stories from antiquity for inspiration.
GPT‐4.5 excels at factual accuracy and hallucinates less than other OpenAI models.
GPT-4.5
What are your thoughts on space exploration?
Space exploration isn't just valuable—it's essential.
People often frame it as a luxury we pursue after solving Earth-bound problems. But space exploration actually helps us address those very challenges: climate change (via satellite monitoring), resource scarcity (through asteroid mining), and existential risk (by enabling eventual multi-planetary living). It also drives technological innovation, economic growth, and inspires humanity toward collective achievement.
We don't explore space to escape Earth. We explore space to better understand, protect, and sustain life on Earth.
GPT-4o
What are your thoughts on space exploration?
Space exploration is one of the most ambitious and profound endeavors humanity has undertaken. It expands our understanding of the universe, drives technological innovation, and inspires generations to think beyond the boundaries of Earth.
From a scientific perspective, space exploration helps us address fundamental questions: Are we alone in the universe? How did the cosmos begin? Can we colonize other planets? Missions like the James Webb Space Telescope, Mars rovers, and deep-space probes continuously reshape our understanding of physics, planetary science, and even biology.
From a practical standpoint, space exploration fuels advancements in materials science, AI, robotics, and medicine. Many everyday technologies—GPS, satellite communications, weather forecasting, and even medical imaging—stem from space research.
There's also the existential argument: humanity's long-term survival may depend on our ability to become a multi-planetary species. While colonizing Mars or mining asteroids is far from easy, these efforts could help us safeguard civilization against existential threats like climate change or asteroid impacts.
That said, space exploration also raises ethical and economic questions. Should we focus more on Earth’s problems before investing trillions in space? How do we prevent the same exploitative behaviors that shaped colonial history from repeating on other planets?
Overall, I see space exploration as a necessary pursuit—not at the cost of solving problems on Earth, but as a way to advance our knowledge, drive innovation, and secure a future beyond our home planet. What’s your take?
GPT‐4.5 is more succinct and conversational.
Stronger reasoning on the horizon
GPT‐4.5 doesn’t think before it responds, which makes its strengths particularly different from reasoning models like OpenAI o1. Compared to OpenAI o1 and OpenAI o3‐mini, GPT‐4.5 is a more general-purpose, innately smarter model. We believe reasoning will be a core capability of future models, and that the two approaches to scaling—pre-training and reasoning—will complement each other. As models like GPT‐4.5 become smarter and more knowledgeable through pre-training, they will serve as an even stronger foundation for reasoning and tool-using agents.
Safety
Each increase in model capabilities is also an opportunity to make the models safer. GPT‐4.5 was trained with new techniques for supervision that are combined with traditional supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) methods like those used for GPT‐4o. We hope this work will serve as a foundation for aligning even more capable future models.
To stress-test our improvements, we conducted a suite of safety tests before deployment, in accordance with our Preparedness Framework. We found that scaling the GPT paradigm contributed to capability improvements across our evaluations. We are publishing the detailed results from these evaluations in the accompanying system card.
How to use GPT-4.5 in ChatGPT
Starting today, ChatGPT Pro users will be able to select GPT‐4.5 in the model picker on web, mobile, and desktop. We will begin rolling out to Plus and Team users next week, then to Enterprise and Edu users the following week.
GPT‐4.5 has access to the latest up-to-date information with search, supports file and image uploads, and can use canvas to work on writing and code. However, GPT‐4.5 does not currently support multimodal features like Voice Mode, video, and screensharing in ChatGPT. In the future, we will work to simplify the user experience so AI “just works” for you.
How to use GPT-4.5 in the API
We’re also previewing GPT‐4.5 in the Chat Completions API, Assistants API, and Batch API to developers on all paid usage tiers. The model supports key features like function calling, Structured Outputs, streaming, and system messages. It also supports vision capabilities through image inputs.
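For illustration, here is a minimal sketch of calling the model through the Chat Completions API with the official `openai` Python SDK. The model identifier `gpt-4.5-preview` and the example prompts are assumptions for the research preview; check your account’s model list for the exact name.

```python
# A minimal sketch, assuming the research preview is exposed as "gpt-4.5-preview"
# in the Chat Completions API (the exact model ID may differ).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System message + streaming, two of the features called out above.
stream = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed model ID for the preview
    messages=[
        {"role": "system", "content": "You are a concise, supportive writing coach."},
        {"role": "user", "content": "Help me tighten the opening line of my essay."},
    ],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)

# Vision via image inputs: pass a content list mixing text and image_url parts.
response = client.chat.completions.create(
    model="gpt-4.5-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the composition of this painting."},
                {"type": "image_url", "image_url": {"url": "https://example.com/painting.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The same request shape extends to function calling and Structured Outputs by adding the `tools` or `response_format` parameters.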
Based on early testing, developers may find GPT‐4.5 particularly useful for applications that benefit from its higher emotional intelligence and creativity—such as writing help, communication, learning, coaching, and brainstorming. It also shows strong capabilities in agentic planning and execution, including multi-step coding workflows and complex task automation.
GPT‐4.5 is a very large and compute-intensive model, making it more expensive than and not a replacement for GPT‐4o. Because of this, we’re evaluating whether to continue serving it in the API long-term as we balance supporting current capabilities with building future models. We look forward to learning more about its strengths, capabilities, and potential applications in real-world settings. If GPT‐4.5 delivers unique value for your use case, your feedback will play an important role in guiding our decision.
Conclusion
With every new order of magnitude of compute comes novel capabilities. GPT‐4.5 is a model at the frontier of what is possible in unsupervised learning. We continue to be surprised by the creativity of the community in uncovering new abilities and unexpected use cases. With GPT‐4.5, we invite you to explore the frontier of unsupervised learning and uncover novel capabilities with us.

Appendix
Below, we provide GPT‐4.5’s results on standard academic benchmarks to illustrate its current performance on tasks traditionally associated with reasoning. Even by purely scaling up unsupervised learning, GPT‐4.5 shows meaningful improvements over previous models like GPT‐4o. Still, we look forward to gaining a more complete picture of GPT‐4.5’s capabilities through this release, because we recognize academic benchmarks don’t always reflect real-world usefulness.
Model evaluation scores
| Benchmark | GPT‐4.5 | GPT‐4o | OpenAI o3‐mini (high) |
| --- | --- | --- | --- |
| GPQA (science) | 71.4% | 53.6% | 79.7% |
| AIME ‘24 (math) | 36.7% | 9.3% | 87.3% |
| MMMLU (multilingual) | 85.1% | 81.5% | 81.1% |
| MMMU (multimodal) | 74.4% | 69.1% | - |
| SWE-Lancer Diamond (coding)* | 32.6% ($186,125) | 23.3% ($138,750) | 10.8% ($89,625) |
| SWE-Bench Verified (coding)* | 38.0% | 30.7% | 61.0% |
*Numbers shown represent best internal performance.
Authors
Foundational contributors
Alex Paino, Ali Kamali, Amin Tootoonchian, Andrew Tulloch, Ben Sokolowsky, Clemens Winter, Colin Wei, Daniel Kappler, Daniel Levy, Felipe Petroski Such, Geoff Salmon, Ian O’Connell, Jason Teplitz, Kai Chen, Nik Tezak, Prafulla Dhariwal, Rapha Gontijo Lopes, Sam Schoenholz, Youlong Cheng, Yujia Jin, Yunxing Dai
Research
Core contributors
Aiden Low, Alec Radford, Alex Carney, Alex Nichol, Alexis Conneau, Ananya Kumar, Ben Wang, Charlotte Cole, Elizabeth Yang, Gabriel Goh, Hadi Salman, Haitang Hu, Heewoo Jun, Ian Sohl, Ishaan Gulrajani, Jacob Coxon, James Betker, Jamie Kiros, Jessica Landon, Kyle Luther, Lia Guy, Lukas Kondraciuk, Lyric Doshi, Mikhail Pavlov, Qiming Yuan, Reimar Leike, Rowan Zellers, Sean Metzger, Shengjia Zhao, Spencer Papay, Tao Wang
Contributors
Adam Lerer, Aidan McLaughlin, Alexander Prokofiev, Alexandra Barr, Allan Jabri, Ananya Kumar, Andrew Gibiansky, Andrew Schmidt, Casey Chu, Chak Li, Chelsea Voss, Chris Hallacy, Chris Koch, Christine McLeavey, David Mely, Dimitris Tsipras, Eric Sigler, Erin Kavanaugh, Farzad Khorasani, Huiwen Chang, Ilya Kostrikov, Ishaan Singal, Ji Lin, Jiahui Yu, Jing Yu Zhang, John Rizzo, Jong Wook Kim, Joyce Lee, Juntang Zhuang, Leo Liu, Li Jing, Long Ouyang, Louis Feuvrier, Mo Bavarian, Nick Stathas, Nitish Keskar, Oleg Murk, Preston Bowman, Scottie Yan, SQ Mah, Tao Xu, Taylor Gordon, Valerie Qi, Wenda Zhou, Yu Zhang
Scaling
Core contributors
Adam Goucher, Alex Chow, Alex Renzin, Aleksandra Spyra, Avi Nayak, Ben Leimberger, Christopher Hesse, Duc Phong Nguyen, Dinghua Li, Eric Peterson, Francis Zhang, Gene Oden, Kai Fricke, Kai Hayashi, Larry Lv, Leqi Zou, Lin Yang, Madeleine Thompson, Michael Petrov, Miguel Castro, Natalia Gimelshein, Phil Tillet, Reza Zamani, Ryan Cheu, Stanley Hsieh, Steve Lee, Stewart Hall, Thomas Raoux, Tianhao Zheng, Vishal Kuo, Yongjik Kim, Yuchen Zhang, Zhuoran Liu
Contributors
Alvin Wan, Andrew Cann, Antoine Pelisse, Anuj Kalia, Aaron Hurst, Avital Oliver, Brad Barnes, Brian Hsu, Chen Ding, Chen Shen, Cheng Chang, Christian Gibson, Duncan Findlay, Fan Wang, Fangyuan Li, Gianluca Borello, Heather Schmidt, Henrique Ponde de Oliveira Pinto, Ikai Lan, Jiayi Weng, James Crooks, Jos Kraaijeveld, Junru Shao, Kenny Hsu, Kenny Nguyen, Kevin King, Leah Burkhardt, Leo Chen, Linden Li, Lu Zhang, Mahmoud Eariby, Marat Dukhan, Mateusz Litwin, Miki Habryn, Natan LaFontaine, Pavel Belov, Peng Su, Prasad Chakka, Rachel Lim, Rajkumar Samuel, Renaud Gaubert, Rory Carmichael, Sarah Dong, Shantanu Jain, Stephen Logsdon, Todd Underwood, Weixing Zhang, Will Sheu, Weiyi Zheng, Yinghai Lu, Yunqiao Zhang
Safety Systems
Andrea Vallone, Andy Applebaum, Cameron Raymond, Chong Zhang, Dan Mossing, Elizabeth Proehl, Eric Wallace, Evan Mays, Grace Zhao, Ian Kivlichan, Irina Kofman, Joel Parish, Kevin Liu, Keren Gu-Lemberg, Kristen Ying, Lama Ahmad, Lilian Weng, Leon Maksin, Leyton Ho, Meghan Shah, Michael Lampe, Michele Wang, Miles Wang, Olivia Watkins, Phillip Guo, Samuel Miserendino, Sam Toizer, Sandhini Agarwal, Tejal Patwardhan, Tom Dupré la Tour, Tong Mu, Tyna Eloundou, Yunyun Wang
Deployment
Adam Brandon, Adam Perelman, Adele Li, Akshay Nathan, Alan Hayes, Alfred Xue, Alison Ben, Alec Gorge, Alex Guziel, Alex Iftimie, Ally Bennett, Andrew Chen, Andrew Wood, Andy Wang, Angad Singh, Anoop Kotha, Antonia Woodford, Anuj Saharan, Ashley Tyra, Atty Eleti, Ben Schneider, Bessie Ji, Beth Hoover, Bill Chen, Blake Samic, Britney Smith, Brian Yu, Caleb Wang, Cary Bassin, Cary Hudson, Charlie Jatt, Chengdu Huang, Chris Beaumont, Christina Huang, Cristina Scheau, Dana Palmie, Daniel Levine, Daryl Neubieser, Dave Cummings, David Sasaki, Dibya Bhattacharjee, Dylan Hunn, Edwin Arbus, Elaine Ya Le, Enis Sert, Eric Kramer, Fred von Lohmann, Gaby Janatpour, Garrett McGrath, Garrett Ollinger, Gary Yang, Hao Sheng, Harold Hotelling, Janardhanan Vembunarayanan, Jeff Harris, Jeffrey Sabin Matsumoto, Jennifer Robinson, Jessica Liang, Jessica Shieh, Jiacheng Yang, Joel Morris, Joseph Florencio, Josh Kaplan, Kan Wu, Karan Sharma, Karen Li, Katie Pypes, Kendal Simon, Kendra Rimbach, Kevin Park, Kevin Rao, Laurance Fauconnet, Lauren Workman, Leher Pathak, Liang Wu, Liang Xiong, Lien Mamitsuka, Lindsay McCallum, Lukas Gross, Manoli Liodakis, Matt Nichols, Michelle Fradin, Minal Khan, Mingxuan Wang, Nacho Soto, Natalie Staudacher, Nikunj Handa, Niko Felix, Ning Liu, Olivier Godement, Oona Gleeson, Philip Pronin, Raymond Li, Reah Miyara, Rohan Nuttall, R.J. Marsan, Sara Culver, Scott Ethersmith, Sean Fitzgerald, Shamez Hemani, Sherwin Wu, Shiao Lee, Shuyang Cheng, Siyuan Fu, Spug Golden, Steve Coffey, Steven Heidel, Sundeep Tirumalareddy, Tabarak Khan, Thomas Degry, Thomas Dimson, Tom Stasi, Tomo Hiratsuka, Trevor Creech, Uzair Navid Iftikhar, Victoria Chernova, Victoria Spiegel, Wanning Jiang, Wenlei Xie, Yaming Lin, Yara Khakbaz, Yilei Qian, Yilong Qin, Yo Shavit, Zhi Bie
Executive Leadership
Bob McGrew, Greg Brockman, Hannah Wong, Jakub Pachocki, Johannes Heidecke, Joanne Jang, Kate Rouch, Kevin Weil, Lauren Itow, Liam Fedus, Mark Chen, Mia Glaese, Mira Murati, Nick Ryder, Sam Altman, Srinivas Narayanan, Tal Broda