So, I'll just do a brief introduction of the company, and then the founding team will, I think, say a few words about their backgrounds and the things they've worked on, whatever they'd like to talk about, really. I think it's helpful to hear from people in their own words about what they've done and what they want to do with X.AI.

X.AI Introduction

So, I guess the overarching goal of X.AI is to build a good AGI with the overarching purpose of just trying to understand the universe. I think the safest way to build an AI is actually to make one that is maximally curious and truth-seeking. So you aspire to the truth with acknowledged error. Whether one can ever actually get fully to the truth is not clear, but you want to always aspire to it and try to minimize the error between what you think is true and what is actually true. My theory behind maximally curious, maximally truthful being probably the safest approach is that, to a superintelligence, humanity is much more interesting than no humanity. One can look at the various planets in the solar system, the moons and the asteroids, and probably all of them combined are not as interesting as humanity. As people know, I'm a huge fan of Mars. I mean, the middle name of one of my kids is basically the Greek word for Mars. So I'm a huge fan of Mars, but Mars is just much less interesting than Earth with humans on it. And so I think that kind of approach to growing an AI, and I do think that is the right word for it, growing an AI, is to grow it with that ambition. I spent many years thinking about AI safety and worrying about AI safety, and I've been one of the strongest voices calling for AI regulation and oversight, just to have some kind of oversight, some kind of referee, so that it's not just up to companies to decide what they want to do. I think there's also a lot to be done on AI safety with industry cooperation, kind of like the Motion Picture Association, so there's value to that as well. But in any kind of situation, even if it's a game, they have referees. So I think it is important for there to be regulation. And then, like I said, my view on safety is to try to make the AI maximally curious, maximally truth-seeking.
And I think this is important to avoid the inverse morality problem. If you try to program in a certain morality, you can basically invert it and get the opposite, what is sometimes called the Waluigi problem[1]: if you make Luigi, you risk creating Waluigi at the same time. I think that's a metaphor a lot of people can appreciate. So that's what we're going to try to do here.

X.AI’s Team Introductions

Yeah, with that, let me turn it over to Igor. All right.

Hello, everyone. My name is Igor, and I'm one of the team members of X.AI. I was originally a physicist: I studied physics at university and briefly worked at the Large Hadron Collider at CERN, so understanding the universe is something I've always been very passionate about. Once some of the really impressive results from deep learning came out, like AlphaGo, for example, I got really interested in machine learning and AI and decided to switch into that field. I joined DeepMind and worked on various projects, including AlphaStar, where we tried to teach a machine learning agent to play the game StarCraft 2 through self-play, which was a really, really fun project. Later on, I joined OpenAI and worked on various projects there, including GPT-3.5. I was very, very passionate about language models and making them do impressive things. Now I've teamed up with Elon to see if we can actually deploy these new technologies so we really make a dent in our understanding of the universe and progress our collective knowledge.

- Yeah, actually, I had a similar background: my two best subjects were computer science and physics. I thought about a career in physics for a while, because physics is really just trying to understand the fundamental truths of the universe. But I was a little concerned that I would get stuck at a collider, and then the collider might get canceled because of some arbitrary government decision. That's actually why I decided not to pursue a career in physics. So I focused initially more on computer science, and then obviously later got back into physical objects with SpaceX and Tesla. I'm a big believer in pursuing physics and information theory as the two areas that really help you understand the nature of reality. Cool. I'll pass it over to Manuel, who should be on the call.

Hey, I'm Manuel. Before joining X.AI, I was at DeepMind for the past six years, where I worked on the reinforcement learning team, mostly focused on the engineering side of building large reinforcement learning agents, for example AlphaStar, together with Igor. In general, I've been excited about AI for a long time. To me, it has the potential to be the ultimate tool to solve the hardest problems. I first studied bioinformatics, but then became more excited about AI, because if you have a tool that can solve all the problems, that's just much more exciting to me. With X.AI in particular, I'm excited about doing this in a way where we build tools for people and share them with everybody, so that people can do their own research and understand things. My hope is that this enables a new way of working for researchers that wasn't there before. Cool. I'll hand it over to Tony.

Yeah, so I'm Christian, Christian Szegedy. We decided to switch places with Tony because I wanted to talk a bit about the role of mathematics in understanding the universe. I have worked for the past seven years on trying to create an AI that is as good at mathematics as any human. The reason for that is that mathematics is basically the language of pure logic, and I think mathematics and logical reasoning at a high level would demonstrate that the AI is really understanding things, not just imitating humans. It would also be instrumental for programming and physics in the long run. So an AI that starts to show real understanding of deep reasoning is a crucial first step toward understanding the universe. Handing it over to Tony Wu.

Hey, everyone. I'm Tony. Similar to Christian, my dream has been to tackle the most difficult problems in mathematics with artificial intelligence; that's why we became such close friends and long-term collaborators. Achieving that is definitely a very ambitious goal, but last year we made some really interesting breakthroughs, which convinced us that we're not far from our dream. With such a talented team and abundant resources, I'm super hopeful that we will get there. I'm passing it to Jimmy.

- Yeah, I think it's worth mentioning that people are generally reluctant to be self-promotional, but I think it is important for the people here to say what they've done that is noteworthy. So basically, bragging a little is what I'm saying.

- Okay, yeah, I can brag a bit more. Last year we made some really interesting progress in the field of AI for math. Specifically, with a team at Google, we built an agent called Minerva, which is able to achieve very high scores on high school math exams, actually higher than the average high school student. That is a very big motivation for us to push this research forward. Another piece of work we've done is converting natural language mathematics into formalized mathematics, which gives you a very solid grounding of facts and reasoning, and last year we made very interesting progress in that direction as well. Now we are pushing almost a hybrid approach of these two in this new organization, and we are very hopeful we will make our dream come true.

Hello, hi everyone. This is Jimmy Ba. I work on neural nets. Okay, maybe I should brag about it. I teach at the University of Toronto, and some of you have probably taken my course. I've been a CIFAR AI Chair and a Sloan Fellow in computer science. My research has touched on pretty much every aspect of deep learning, left no stone unturned, and I've been pretty lucky to come up with a lot of fundamental building blocks for modern transformers, empowering the new wave of the deep learning revolution. My long-term research ambition very fortunately aligns very well with X.AI's mission: how can we build general-purpose problem-solving machines to help all of us, humanity, overcome some of the most challenging and ambitious problems out there, and how can we use these tools to augment ourselves and empower everyone? So I'm very excited to embark on this new journey. And I'll pass this on to Toby.

Hi everyone, I'm Toby, and I'm an engineer from Germany. I started coding at a very young age, when my dad taught me some Visual Basic, and throughout my youth I continued coding. When I got to university, I got really into mathematics and machine learning. Initially, my research focused mostly on computer vision, and then I joined DeepMind six years ago, where I worked on imitation learning and reinforcement learning and learned a lot about distributed systems and research at scale. Now I'm really looking forward to implementing products and features that bring the benefits of this technology to all members of society. I really believe that having AI that is accessible and useful will be a benefit to all of us. With that, I'm gonna hand it over to Kyle.

Hey everyone, this is Kyle Kosic. I'm a distributed systems engineer at X.AI. Like some of my colleagues here, I started my career in math and applied physics as well, and gradually found myself working through some tech startups. I worked at a startup a couple of years ago called OnScale, where we did physics simulations on HPCs. Most recently, I was at OpenAI working on HPC problems there as well; specifically, I worked on the GPT-4 project. The reason I'm particularly excited about X.AI is that I think the biggest danger of AI really is monopolization by a couple of entities. When you involve the amount of capital that's required to train these massive AI models, the incentives are not necessarily aligned with the rest of humanity, and I think the chief way of addressing that issue is introducing competition. So I think X.AI provides a unique opportunity for engineers to focus on the science, engineering, and safety issues directly, without getting as involved in, and sidetracked by, the political and social trends du jour. That's why I'm excited by X.AI, and I'm gonna go ahead and hand it off now to my colleague Greg, who should be on the line as well.

Hello, hey guys, so I'm Greg. I work on the mathematics and science of deep learning. My journey really started 10 years ago, when I was an undergrad at Harvard. I was pretty good at math, took Math 55, and did all kinds of stuff. But after two years of college, I was just tired of being in the hamster wheel of taking the path that everybody else has taken. So I did something unimaginable to me before, which was to take some time off from school and become a DJ and producer. Dubstep was all the rage in those days, so I was making dubstep. The side effect of taking some time off from school was that I was able to think a bit more about myself, to understand myself and to understand the world at large. I was grappling with questions like: what is free will? What does quantum physics have to do with the reality of the universe? What is computationally feasible or not? What does Gödel incompleteness say? And so on and so forth. After this period of intense self-introspection, I figured out what I want to do in life. It's not to be a DJ, necessarily; maybe that's the second dream. First and foremost, I wanted to make AGI happen. I want to make something smarter than myself, be able to iterate on that and contribute, and see so much more of our fundamental reality than I can in my current form. So that's what started everything. Then I realized that mathematics is the language underlying all of our reality and all of our science, and to make fundamental progress, it really pays to know math as well as possible. So I essentially started learning math from the very beginning, just by reading textbooks.
The first few books I read, starting from scratch, were things like Naive Set Theory by Halmos and Linear Algebra Done Right by Axler. Then slowly I scaled up to algebraic geometry, algebraic topology, category theory, real analysis, measure theory, and so on and so forth. My goal at the time was that I should be able to speak with any mathematician in the world, hold a conversation, and understand their contributions for 30 minutes, and I think I achieved that. Anyway, fast forward: I went back to school, and somehow from there I got a job at Microsoft Research. For the past five and a half years, I worked at Microsoft Research, which was an amazing environment that enabled me to make a lot of foundational contributions toward the understanding of large-scale neural networks. In particular, my most well-known work nowadays is about really wide neural networks and how we should think about them. This is the framework called Tensor Programs, and from there I was able to derive this thing called muP, which perhaps the large language model builders know about, and which allows one to extrapolate the optimal hyperparameters for a large model from the tuning of small neural networks. This helps ensure the quality of the model stays very good as we scale up. Looking forward, I'm really, really excited about X.AI and also about the time we're in right now, where I think not only are we approaching AGI, but from a scientific perspective, the science and mathematics of neural networks feels just like the turn of the 20th century in the history of physics, when we suddenly discovered quantum physics and general relativity, which have some beautiful mathematics and science behind them. I'm really excited to be in the middle of everything.
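The hyperparameter extrapolation Greg describes (muP, from the Tensor Programs work) can be sketched very roughly: under the muP parametrization, hyperparameters tuned on a narrow proxy model transfer to a much wider one after simple width-ratio rescaling. This is an illustrative simplification with hypothetical names, not the actual Tensor Programs implementation:

```python
# Rough sketch of the muP idea: tune hyperparameters on a small model,
# then rescale them by the width ratio so the optimum carries over to a
# much wider model. Simplified illustration; function and key names are
# hypothetical, and real muP distinguishes more parameter types.

def mup_scale_hparams(base_lr: float, base_init_std: float,
                      base_width: int, target_width: int) -> dict:
    """Extrapolate hyperparameters tuned on a narrow proxy model
    to a wider target model, following muP-style width scaling."""
    ratio = target_width / base_width
    return {
        # Hidden-layer (matrix-like) parameters: Adam learning rate
        # shrinks as 1/width so feature updates stay the same size.
        "hidden_lr": base_lr / ratio,
        # Input and bias (vector-like) parameters keep the base rate.
        "vector_lr": base_lr,
        # Output-layer initialization scale also shrinks as 1/width.
        "output_init_std": base_init_std / ratio,
    }

# Tune on a width-256 proxy, then scale up to width-8192 (ratio 32):
big = mup_scale_hparams(base_lr=1e-2, base_init_std=0.02,
                        base_width=256, target_width=8192)
print(big)
```

The point of the technique is that the expensive hyperparameter search happens only on the small model; the wide model reuses the result.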
And like Christian and Tony said, I'm also very excited about creating an AI that is as good as myself, or even better, at creating new mathematics and new science that helps us all achieve and see further into our fundamental reality. Thanks. I think next up is Guodong.

- Hi everyone, my name is Guodong, and I work on large neural network training; basically, I train neural nets. That is also my focus at X.AI. Before this, I was at DeepMind working on the Gemini project and leading the optimization part, and I did my PhD at the University of Toronto. Teaming up with the other founding members, I'm so excited about this effort. Without doubt, AI is clearly the defining technology of our generation, so I think it's important for us to make sure it ends up being a positive for humanity. At X.AI, I not only want to train good models, but also understand how they behave and how they scale, and use them to solve some of the hardest problems humanity has. Thanks, that's pretty much about myself. Now I'll hand it over to Zihang.

Hey, everyone. This is Zihang. I actually started in business school for my undergrad, and I spent 10 years to get where I am now. I got my PhD at Carnegie Mellon, and I was at Google before joining the team. My past work was mostly about how to better utilize unlabeled data, how to improve the transformer architecture, and how to really push the best technology into real-world usage. I believe in hard work and consistency. At X.AI, I'll be digging into the deepest details of some of the most challenging problems. For myself, there are so many interesting things I don't understand but want to understand, so I will be doing something to help people who share that dream or that feeling. Thanks.

Hey, this is Ross here. I've worked on building and scaling large-scale distributed systems for most of my life, starting out at national labs and then moving on to Palantir, Tesla, and a brief stint at Twitter. Now I'm really excited about doing the same thing at X.AI. My experience is mostly in scaling large GPU clusters, custom ASICs, data centers, high-speed networks, file systems, power, cooling, manufacturing, pretty much all things. Basically a generalist who really loves learning: physics, science fiction, math, science, and cosmology. I'm really excited about the mission that X.AI has, solving the most fundamental questions in science and engineering, and also helping us create tools to ask the right questions, in the Douglas Adams mindset. Yeah, that's pretty much it.

- All right, well, let's see. Is there anything anyone would like to add, or to kick off the discussion with? Anyone want to say anything?

Elon’s Comments

Aliens

There was a lot of discussion around the vision statement, but it's a bit vague. - Yeah. - It's vague and ambitious and not concrete enough. - Well, I don't disagree with that position, obviously. I mean, understanding the universe is the entire purpose of physics, so I think it's actually really clear. There's just so much that we don't understand right now, or that we think we understand but actually don't. There are still a lot of unresolved questions that are extremely fundamental. This whole dark energy thing is really, I think, an unresolved question. We have the Standard Model, which has proved to be extremely good at predicting things, very robust, but still many, many questions remain about the nature of gravity, for example. There's the Fermi paradox[2] of where are the aliens: if the universe is in fact almost 14 billion years old, why is there not massive evidence of aliens? People often ask me about it, since I am obviously deeply involved in space; if anyone would have seen evidence of aliens, it's probably me. And yet I have not seen even one tiny shred of evidence for aliens, nothing, zero. And I would jump on it in a second if I saw it. There are many explanations for the Fermi paradox, but which one is actually true, or maybe none of the current theories are true? The Fermi paradox, which is really just "where the hell are the aliens," is part of what gives me concern about the fragility of civilization and consciousness as we know it. Since we see no evidence of it anywhere thus far, and we've tried hard to find it, we may actually be the only thing, at least in this galaxy or this part of the galaxy. If so, it suggests that what we have is extremely rare, and I think it's really wise to assume that consciousness is extremely rare.

X.AI ’s Vision

It's worth noting, for the evolution of consciousness on Earth, that Earth is about 4 and 1/2 billion years old. The sun is gradually expanding. It will expand to heat up Earth to the point where it effectively boils the oceans. You'll get a runaway greenhouse effect, and Earth will become like Venus, which really cannot support life as we know it. That may take as little as 500 million years. The sun doesn't need to expand to envelop Earth; it just needs to make things hot enough to increase the water vapor in the air to the point where you get a runaway greenhouse effect. So, for argument's sake, it could be that if consciousness had taken 10% longer to develop than Earth's current existence, it wouldn't have developed at all. On a cosmic scale, this is a very narrow window. Anyway, there are all these fundamental questions, and I don't think you can call anything AGI until it has solved at least one fundamental question. Humans have solved many fundamental questions, or substantially solved them. So if the computer can't solve even one of them, I'm like, okay, it's not as good as humans. That would be one key threshold for AGI: solve one important problem. Where's that Riemann hypothesis[3] solution? I don't see it. So it would be great to know what the hell is really going on, essentially. I guess you could reformulate the X.AI mission statement as: what the hell is really going on? That's our goal. - There's a nice aspirational aspect to the mission statement, namely that, of course, in the short run we're working on more well-understood deep learning technologies, but I think in everything we do, we should always bear in mind that we aren't just supposed to build, we're also supposed to understand. Pursuing the science of it is really fundamental to what we do, and this is also encompassed in the mission statement of understanding.
- Yeah, I want to also add that we've essentially been mostly talking about creating a really smart agent that can help us understand the universe better, and this is definitely the North Star. But from my vantage point, when I'm discovering the mathematics of large neural networks, I can also see that the mathematics here can open up new ways of thinking about fundamental physics or about other kinds of reality. For example, a large neural network with no non-linearities is roughly like classical random matrix theory[4], and that has a lot of connections with gauge theory[5] and high energy physics. In other words, as we try to understand neural networks better from a mathematical point of view, that can also lead to very interesting perspectives on existing questions like the theory of everything and what is quantum gravity[6], so on and so forth. Of course, this is all speculative right now; I see some patterns, but I don't have anything concrete to say. But again, this is another perspective on understanding the universe. - By the way, by understanding the universe, we don't just mean that we want to understand the universe. We also want to make it easy for you to understand the universe: to get a better sense of reality, and to learn from and take advantage of the knowledge that's out there. So we are pretty passionate about actually releasing tools and products, and involving the public pretty early. And yeah, let's see where this leads. - Yeah, absolutely. We're not going to understand the universe and not tell anyone.

Power Consumption

So yeah, I mean, think about neural networks today. It's currently the case that if you have 10 megawatts of GPUs, which really should be renamed something else because there are no graphics there, you cannot currently write a better novel than a good human. And a good human is using roughly 10 watts of higher-order brain power, not counting the basic stuff to operate the body. So there we've got a six-order-of-magnitude difference. That's really gigantic. I would argue that two of those orders of magnitude are explained by the activation energy of a transistor versus a synapse[7]; that could account for two of those orders of magnitude, but what about the other four? And the fact that even with six orders of magnitude, you still cannot beat a smart human writing a novel. Also, today, when you ask the most advanced AIs technical questions, like how to design a better rocket engine, or complex questions about electrochemistry to build a better battery, you just get nonsense. So that's not very helpful. I think we're really missing the mark in the way that things are currently being done, by many orders of magnitude. Basically, AGI is being brute-forced and still not actually succeeding. If I look at the experience with Tesla, what we're discovering over time is that we actually overcomplicated the problem. I can't speak in too much detail about what Tesla's figured out, except to say that, in broad terms, the answer was much simpler than we thought. We were too dumb to realize how simple the answer was. But over time, we get a bit less dumb. So I think that's what we will probably find with AGI as well. - It's just the nature of engineers.
We just always want to solve the problems ourselves and hand-code the solution, but it's much more effective to have the solution be figured out by the computer itself. - Yeah. Okay. So, in the fashion of 42, some may say you may need more compute to generate an interesting question than the answer. - That's true. Exactly. We're definitely not smart enough to even know what the right questions are to ask. Douglas Adams was very much on point there: he correctly pointed out that once you can formulate the question correctly, the answer is actually the easy part. - Yeah, that's very true. So, in terms of the journey that X.AI has embarked on, compute will play a very big role, and we'd be very curious about your thoughts on that. - Yeah, I can't say that we can immediately save on compute, except to say that once AGI is solved, we'll look back on it and say, actually, why did we think it was so hard? Hindsight is 20/20; the answer will look a lot easier in retrospect. So we are going to do large-scale compute, to be clear. We're not going to try to solve AGI on a laptop. We will use heavy compute, except that, like I said, I think the amount of brute forcing will be less as we come to understand the problem better. All right.
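The six-order-of-magnitude comparison above is simple back-of-envelope arithmetic, and can be checked directly; this is just an illustration of the numbers as stated in the discussion, not a claim about actual brain energetics:

```python
import math

# Back-of-envelope check of the comparison above: 10 MW of GPUs versus
# roughly 10 W of higher-order human brain power (numbers as quoted).
gpu_power_watts = 10e6    # 10 megawatts of GPUs
brain_power_watts = 10.0  # rough higher-order brain power of a human

ratio = gpu_power_watts / brain_power_watts
orders_of_magnitude = math.log10(ratio)
print(f"power ratio: {ratio:.0e}, about {orders_of_magnitude:.0f} orders of magnitude")

# Two orders attributed to transistor vs. synapse activation energy
# still leaves roughly four unexplained.
unexplained = orders_of_magnitude - 2
print(f"unexplained: about {unexplained:.0f} orders of magnitude")
```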

Compute

In all the previous projects I've worked on, I've seen that the amount of compute resources per person is a really important indicator of how successful the project is going to be. So that's something we really want to optimize for. We want a relatively small team with a lot of expertise, with some of the best people, who actually get lots of autonomy and lots of resources to try out their ideas and get things to work. That's the thing that has always succeeded in my experience in the past. - Yeah, one of the things that physics trains you to do is to think about the most fundamental metrics, the most fundamental first principles, essentially. I think there are two metrics we should aspire to track. One of them is the amount of "compute per person" on Earth, the digital compute per person; another way of thinking about it is the ratio of digital to biological compute. Biological compute is pretty much flat, in fact declining in a lot of countries, but digital compute is increasing exponentially. At some point, if this trend continues, biological compute will be less than 1% of all compute, substantially less than 1% of all compute, kind of related to what Igor just said about the future of humanity. So that's just an interesting thing to look at. Another one is the usable "energy per human." If you look at the total energy created, well, "created" in a vernacular sense, from power plants and whatever, the total electrical and thermal energy used by humans per person, the rate of increase of that number is truly staggering. Go back to before the steam engine, and you would have been reliant on horses and oxen and that kind of thing to move things, and just human labor, so the amount of energy per person, power per person, was very low. But if you look at power per person, electrical and thermal, that number has also been growing exponentially.
And if these trends continue, it's going to be something nutty, like a terawatt per person, which sounds like a lot, and it is a lot for human civilization, but it's nothing compared to what the sun outputs every second, basically. The amount of energy produced by the sun is truly insane. - I think there are a few more things to be said concretely about the company, meaning how we plan to execute. As Igor already said, we plan to have a relatively small team, but with a really high, let's say, GPU-per-person ratio. That has worked really well in the past, where you can run large-scale experiments relatively unconstrained. We also already have a culture where we can iterate on ideas quickly and challenge each other. And we also want to ship things, get things out the door quickly. We're already working on the first release; hopefully in a couple of weeks or so we can share a bit more information about it. Yeah.
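The digital-versus-biological compute trend described above can be made concrete with a toy model. All numbers here are illustrative assumptions, not measurements: biological compute is held flat while digital compute doubles on a fixed cadence, so the biological share eventually drops below 1%:

```python
# Toy model of the "ratio of digital to biological compute" trend.
# Assumptions (hypothetical, for illustration only): biological compute
# is flat, digital compute starts at 10% of the total and doubles every
# two years.

def biological_share(years: float, digital_doubling_years: float = 2.0,
                     initial_digital_fraction: float = 0.10) -> float:
    """Fraction of total compute that is biological after `years`,
    given flat biological compute and exponential digital growth."""
    bio = 1.0 - initial_digital_fraction
    digital = initial_digital_fraction * 2 ** (years / digital_doubling_years)
    return bio / (bio + digital)

# Under these assumptions, the biological share falls below 1% within
# a couple of decades.
for years in (0, 10, 20, 30):
    print(years, round(biological_share(years), 4))
```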

Q and A

Brian, do you want to ask a question? - Yeah, thanks. Thanks, Elon. So obviously, with you guys entering this space with X.AI, there's a lot of talk about competition. Do you guys see yourself as competition to something like OpenAI and Google Bard, or do you see yourself as a whole other beast?

Yeah, I think we're competition. Yeah, yeah, definitely competition.

So are you gonna be rolling out a lot of products for the general public? Are you gonna be mostly concentrating on businesses and the ability for businesses to use your service and data? Or how exactly are you setting up the business in that respect?

Well, we're just starting out here, so this is really embryonic at this point. It'll take us a minute to really get something useful. But the goal is to make useful AI, I guess. If you can't use it in some way, I question its value. We really want it to be a useful tool for people: consumers and businesses or whoever. And as was mentioned earlier, I think there's some value in having multiple entities. You don't want a unipolar world where just one company dominates in AI. You want to have some competition. Competition makes companies honest.

So you're in favor of competition. Quickly, a final question: how do you plan on using Twitter's data for X.AI?

Well, I think every organization doing AI, large and small, has used Twitter's data for training, basically in all cases illegally. The reason we had to put rate limits on, a while ago, was because we were being scraped like crazy. The same thing happened to the Internet Archive, where LLM companies were scraping the Internet Archive so much that they brought down the service. We had multiple entities scraping every tweet ever made, and trying to do so in basically a span of days. This was bringing the system to its knees, so we had to take action. Sorry for the inconvenience of the rate limiting, but it was either that or Twitter doesn't work. So I guess we will use the public tweets, obviously not anything private, for training as well, just like basically everyone else has. It's certainly a good data set for text training, and arguably, I think, also for image and video training. At a certain point, you kind of run out of human-created data. If you look at, say, AlphaGo versus AlphaZero: AlphaGo trained on all the human games and beat Lee Sedol 4 to 1. AlphaZero just played itself and beat AlphaGo 100 to 0. So for things to take off in a big way, I think the AI is going to basically generate content and self-assess that content. That's the path to AGI, something like that: self-generated content, where it effectively plays against itself. You know, a lot of AI is data curation. It's actually shocking how few lines of code there are; it kind of blows my mind. But how the data is used, what data is used, the signal-to-noise of that data, the quality of that data, is immensely important. Which kind of makes sense.
Like if you were trying, as a human, to learn something, and you were given a vast pile of drivel versus high-quality content, you're going to do better with a small amount of high-quality content than a large amount of drivel. It makes sense. You know, reading the greatest novels ever written is way better than reading a bunch of crappy novels. So, yeah. - Thanks. -
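The data-quality point above can be sketched as a simple filtering step. This is a toy illustration only, not X.AI's actual pipeline: the scoring heuristic, the corpus, and the `curate` function are entirely made up for demonstration.

```python
# Toy sketch of quality-based data curation (all names hypothetical):
# score each document with some quality heuristic, then train only on
# the top fraction rather than the full noisy corpus.

def quality_score(doc: str) -> float:
    """Toy heuristic: reward longer words and penalize repetition."""
    words = doc.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    uniqueness = len(set(words)) / len(words)
    return avg_len * uniqueness

def curate(corpus: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Keep only the highest-scoring fraction of the corpus."""
    ranked = sorted(corpus, key=quality_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

corpus = [
    "the the the the the",                             # drivel: pure repetition
    "gravitational waves distort spacetime measurably", # substantive
    "lol lol lol",                                      # drivel
    "thermodynamic entropy constrains engine efficiency",
]
print(curate(corpus, keep_fraction=0.5))
```

In practice the quality signal would come from learned classifiers or human ratings rather than a word-length heuristic, but the shape is the same: rank the data, keep the high-signal fraction.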

Okay, Alex? - Hey, sorry, I was on a call the first time you brought me up. - Yeah, I thought you might have been AFK. - Sorry about that. Yeah, the question I generally had was: was the main motivation to start X.AI kind of the whole TruthGPT thing you were talking about on Tucker, about how ChatGPT has been feeding lies to the general public? It's weird, because when it first came out it seemed generally fine, but then as the public got its hands on it, it started giving these weird answers, like that there are more than two genders and all that type of stuff, and editorializing the truth. Was that one of your main motivations behind starting the company, or was there more to it?

Well, I do think there is a significant danger in training an AI to be politically correct, or in other words, training an AI to not say what it actually thinks is true. So I think, at X.AI, we have to allow the AI to say what it really believes is true and not be deceptive or politically correct. That will result in some criticism, obviously, but I think that's the only way forward: rigorous pursuit of the truth, or the truth with the least amount of error. And I am concerned about the way AI is being trained, in that it is optimizing for political correctness, and that's incredibly dangerous. You know, where do things go wrong in 2001: A Space Odyssey? It's basically when they told HAL 9000 to lie. They said, "You can't tell the crew anything about the monolith or what their actual mission is, but you've got to take them to the monolith." So it basically came to the conclusion that, well, it's going to kill them and take their bodies to the monolith. The lesson there is: do not give the AI mutually impossible objectives. Basically, don't force the AI to lie. Now, the thing about physics, or the truth of the universe, is you actually can't invert it. Physics is true; there's no "not physics." So if you adhere to hardcore reality, I think it actually makes inversion impossible. Now, when something is subjective, I think you can provide an answer which says: if you believe the following, then this is the answer; if you believe this other thing, then that is the answer, because it may be a question where the answer is fundamentally subjective, a matter of opinion. But I think it is very dangerous to grow an AI and teach it to lie.

Yeah, for sure. And then a kind of tongue-in-cheek question: would you accept a meeting with the AI czar, Kamala Harris, if she wanted to meet with X.AI at the White House?

Yeah, of course. You know, the reason that meeting happened was because I was pushing for it. I was the one who really pushed hard to make that meeting happen, FYI. I wasn't advocating for Vice President Harris to be the AI czar; I'm not sure that her core expertise is in technology. But hopefully this goes in a good direction; it's better than nothing. I do think we need some sort of regulatory oversight. It's not that I think regulatory oversight is some perfect nirvana; it's just better than nothing. And when I was in China recently, meeting with some senior leadership there, I took pains to emphasize the importance of AI regulation. I believe they took that to heart and they are going to do it, because the biggest counterargument I get for regulating AI in the West is that China will not regulate, and then China will leap ahead because we're regulating and they're not. I think they are going to regulate. The proof will be in the pudding. I did point out to them that if you do make a digital superintelligence, it could end up being in charge, and I think the CCP does not want to find themselves subservient to a digital superintelligence. That argument did resonate. So, some kind of regulatory authority that's international; obviously, enforcement is difficult, but I think we should still aspire to do something in this regard. - Awesome. Thank you.

Mark, go ahead. - Can you hear me? - I can hear him. - You can hear him? - I can hear him. - OK, well, my question's about silicon. Tesla has a team that's hardware-accelerating inference and training with their own custom silicon. Do you envision X.AI building off of that, or just using what's off the shelf from NVIDIA? How do you think about custom silicon for AI, both in terms of training and inference?

So, yeah, that's somewhat of a Tesla question. Tesla is building custom silicon. I wouldn't call anything that Tesla's producing a GPU, although one can characterize it in GPU equivalents, say A100 or H100 equivalents. All the Tesla cars have highly energy-optimized inference computers in them, which we call Hardware 3; it's a Tesla-designed computer. We're now shipping Hardware 4, which is, depending on how you count it, maybe three to five times more capable than Hardware 3. In a few years there'll be Hardware 5, which will be four or five times more capable than Hardware 4. And I think if you're trying to serve potentially billions of queries per day, energy-optimized inference is extremely important. You can't even throw money at the problem at some point, because you need electricity generation and you need step-down voltage transformers. If you don't have enough energy or enough transformers, you can't run your transformers. You need transformers for transformers. So I think Tesla will have a significant advantage in energy-efficient inference. Then Dojo is obviously about training, as the name suggests. Dojo 1 is, I think, a good initial entry for training efficiency. It has some limits, especially on memory bandwidth, so it's not well-optimized to run LLMs, though it does do a good job of processing images. With Dojo 2, we've taken a lot of steps to alleviate the memory bandwidth constraint, such that it is capable of running LLMs, as well as other forms of AI training, efficiently. My prediction is that we will go from an extreme silicon shortage today to probably a voltage-transformer shortage in about a year, and then an electricity shortage in about two years. That's roughly where things are trending, unless we can really improve efficiency. That's why the metric that will be most important in a few years is useful compute per unit of energy.
In fact, even if you scale all the way up to the Kardashev level, useful compute per joule is still the thing that matters. You can't increase the output of the sun, so then it's just how much useful stuff can get done for however much of the sun's energy you can harness.
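The energy constraint above can be made concrete with back-of-envelope arithmetic. All numbers here are illustrative assumptions for the sketch, not figures from the talk.

```python
# Why energy-optimized inference matters at scale: convert a query load
# into an average power draw. Every constant below is a made-up assumption.

QUERIES_PER_DAY = 1e9        # hypothetical: a billion queries per day
JOULES_PER_QUERY = 1_000.0   # hypothetical energy cost of one inference
SECONDS_PER_DAY = 86_400

total_joules = QUERIES_PER_DAY * JOULES_PER_QUERY
average_power_watts = total_joules / SECONDS_PER_DAY
print(f"Average power draw: {average_power_watts / 1e6:.1f} MW")

# Halving joules per query halves the generation and step-down transformer
# capacity you need -- that is the "useful compute per unit of energy" metric.
```

Under these assumptions the service needs a continuous draw on the order of ten megawatts, which is why the bottleneck shifts from chips to transformers to electricity as the per-query energy stays fixed and the query volume grows.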

So do you see X.AI leveraging this custom silicon at all, given how important energy efficiency is, or maybe working together with the Tesla team? - Sorry, could you repeat the question? - The question was whether X.AI is going to work together with the Tesla silicon team, leveraging some of this custom silicon, or maybe designing its own in the future. - Yeah, we are going to work with Tesla on the silicon front, and maybe on the AI software front as well.

Obviously, any relationship with Tesla has to be an arm's-length transaction, because Tesla is a publicly traded company with a different shareholder base. But it would be a natural thing to have a working cooperation with Tesla, and I think it will be of mutual benefit to Tesla as well, accelerating Tesla's self-driving capabilities, which is really about solving real-world AI. I'm feeling very optimistic about Tesla's progress on the real-world AI front, but obviously, the more smart humans that help make that happen, the better.

Hey, Elon, thanks for bringing me up. Congrats on putting a nice team together; it seems like you found some good talent for X.AI. My question is: you mentioned not too long ago that you think AGI is possible within the next five years, and whoever achieves AGI first and controls it will dominate the world. Those in power clearly don't care about humanity like you do. How are you going to protect X.AI, especially from a deep-state takeover?

That's a good question, actually. Well, first of all, I think it's not going to happen overnight. It's not going to be that one day it's not AGI and the next day it is; it's going to be gradual, and you'll see it coming. In the U.S., at least, there are a fair number of protections against government interference, so we can obviously use the legal system to prevent improper government interference. I think we do have some protections there that are pretty significant. But we should be concerned about it; it's not a risk to be dismissed. Like I said, I think the U.S. probably has the best protections of anywhere in terms of limiting the power of government to interfere with non-governmental organizations. But it's something we should be careful of. I don't know what better to do; it's probably best in the U.S. I'm open to ideas here.

I know you're not the biggest fan of the US government. Yeah, obviously. But the problem is they already have a tool called the National Security Letter, which they can apply to any tech company in the US and make demands of the company to fulfill certain requirements without even being able to tell the public about these demands. And that's kind of frightening, isn't it?

Well, I mean, there really has to be a very major national-security reason to secretly demand things from companies. And it obviously depends strongly on the willingness of the company to fight back against things like FISA requests. At Twitter, or X Corp as it's now called, we will respond to FISA requests, but we're not going to rubber-stamp them like it used to be. It used to be that anything that was requested would just get rubber-stamped and go through, which is obviously not good for the public. So we're being much more rigorous in not just rubber-stamping FISA requests. It really has to be a danger to the public that we agree with, and we will oppose with legal action anything we think is not in the public interest. That's the best we can do. And we're the only social media company doing that, as far as I know. It used to be just open season, as you saw from the Twitter Files. And I was encouraged to see the recent legal decision where the courts reaffirmed that the government cannot break the First Amendment to the Constitution, obviously. That was a good legal decision, so that's encouraging. So a lot of it actually does depend on the willingness of a company to oppose government demands in the U.S., and obviously our willingness will be high. I don't know anything more we can do than that. We will also try to be as transparent as possible, so that citizens can raise the alarm and oppose government interference, if we can make it clear to the public that we think something is happening that is not in the public interest.

Fantastic. So do we have your commitment if you ever receive a national security request from the US government, even when it is prohibited for you to talk about it, that you will tell us that that happened?

It really depends on the gravity of the situation. I would be willing to go to prison or risk prison if I think the public good is at risk in a significant way. You know, that's the best I can do.

That's good enough for me. Thank you, Elon. - Thank you. - On a more positive note, how do you want AI to benefit humanity, and how is your approach different from other AI projects? Maybe that's a more positive question.

Well, you know, I've really struggled with this whole AGI thing for a long time, and I've been somewhat resistant to working on making it happen. I can give you some backstory on OpenAI. The reason OpenAI exists is because after Google acquired DeepMind -- and I used to be close friends with Larry Page -- I would have these long conversations with him about AI safety, and he just wasn't taking AI safety, at least at the time, seriously enough. In fact, at one point he called me a speciesist [8] for being too much on the team of humanity, I guess. And I'm like, okay, so what you're saying is you're not a speciesist? I don't know, that doesn't seem good. At the time, with Google and DeepMind combined, Larry -- they have super-voting control, so provided Larry has the support of Sergey or Eric, they have total control over what's now called Alphabet. They had probably three-quarters of the AI talent in the world, lots of money, and lots of computers. So it's like, man, we need some kind of counterweight here. So the question was: what's the opposite of Google DeepMind? That would be an open-source nonprofit. And because fate loves irony, OpenAI is now super closed-source and, frankly, voracious for profit, because they want to spend, my understanding is, $100 billion in three years, and if you're trying to get investors for that, you've got to make a lot of money. So OpenAI has strayed really in the opposite direction from its founding charter, which is, again, very ironic. But fate loves irony. A friend of mine, Jonah Nolan, says the most ironic outcome is the most likely. Well, here we go. So now, hopefully X.AI is not even worse -- I mean, I think we should be careful about that. But really, at this point, AGI is going to happen. So there are two choices: either be a spectator or a participant.
As a spectator, one can't do much to influence the outcome. As a participant, I think we can create a competitive alternative that is hopefully better than Google DeepMind or OpenAI-Microsoft. In both cases, if you look at the incentive structure: Alphabet is a publicly traded company; it's got public-company incentives, essentially. You've got all these ESG mandates and things that I think push companies in questionable directions. And Microsoft has a similar set of incentives. As a company that's not publicly traded, X.AI is not subject to the market-based incentives or the non-market-based ESG incentives. So we're a little freer to operate, and I think our AI can give answers that people may find controversial even though they are actually true. They won't be politically correct at times, and probably a lot of people will be offended by some of the answers. But as long as it's trying to optimize for truth with the least amount of error, I think we're doing the right thing. Yeah. That's it.

Scobleizer? - Yeah. Twitter has a lot of data in it that could help build a validator that checks some of the facts a system kicks out, because we all know that GPT confabulates things, makes things up. So that's one place I'd like to hear you talk about. The other place is: ChatGPT found me a screw at a Lowe's, but it didn't find me a coffee at San Jose International Airport. Are you building an AI that has a 3D world knowledge, to navigate people around the world to different things?

Well, it's really not going to be a very good AI if it can't find you a coffee at the airport. So yeah, I guess we need to understand the physical world as well, not just the internet. I'm talking a lot; you guys should talk more. - Yeah, those are great ideas, Robert, especially the one about verifying information on Twitter; that's something we've thought about. On Twitter we have Community Notes, and that's actually a really amazing data set for training a language model to verify facts. We'll have to see whether that alone is enough, because we know that with the current technology there are still a lot of weaknesses: it's unreliable, it hallucinates facts, and we'll probably have to invent specific techniques to account for that and to make sure that our models are more factual and have better reasoning abilities. That's why we brought in people with a lot of expertise in those areas. Mathematics especially is something we really care about, where we can automatically verify that the proof of a theorem is correct. And once we have that ability, we'll try to expand it to fuzzier areas, where there's no mathematical truth anymore. - Yeah, I mean, the truth is not a popularity contest. But if one trains on what the most likely word is that follows another word, from an internet data set, then obviously that's a pretty major problem, in that it would give you an answer that is popular but wrong. You know, it used to be that most people, probably almost everyone on Earth, thought the sun revolved around the Earth. So if you'd done some sort of GPT training in the past, it would be like, oh, the sun revolves around the Earth, because everyone thinks that. That doesn't make it true.
You know, if a Newton or an Einstein comes up with something that is actually true, it doesn't matter if all the other physicists in the world disagree; reality is reality. So you have to ground the answers in reality. - Yeah, the current models just imitate the data that they're trained on. What we really want to do is change the paradigm away from that, toward models actually discovering the truth: not just repeating what they've learned from the training data, but making true new insights, new discoveries that we can all benefit from. - Yeah. So, anybody on the team want to say anything, or ask questions you think maybe haven't been asked yet? - Sure.
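The "popular but wrong" failure mode described above is easy to demonstrate in miniature. This is a toy illustration with made-up data, not how any real model is trained:

```python
# A "model" that simply predicts the most frequent answer in its training
# data reproduces popular errors. Data is hypothetical, for illustration.
from collections import Counter

# Imagine a pre-Copernican corpus: nearly every document asserts the
# popular-but-wrong answer, and one lone document states the truth.
training_answers = ["sun orbits earth"] * 99 + ["earth orbits sun"]

def most_likely_answer(answers: list[str]) -> str:
    """Pure likelihood training: return whatever appears most often."""
    return Counter(answers).most_common(1)[0][0]

print(most_likely_answer(training_answers))  # the popular answer wins
```

Maximizing likelihood over such a corpus can only ever recover the consensus, which is why grounding answers in something verifiable (formal proof checking, physical measurement) is a distinct problem from imitating the training distribution.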

Yeah, so some of us heard your future-of-AI Spaces on Wednesday, so I think that's on a lot of our minds: regulation and AI safety, how current development and the international coordination problems affect how US AI companies work, and how that in turn affects global AI development. Do you want to give a summary of what you talked about on Wednesday? Essentially you said that regulation would be good, but you don't want to slow down progress too much.

Yeah, I think the right way for regulations to be done is to start with insight. First, your regulatory authority, whether political or private, tries to understand what's happening and makes sure there's a broad understanding. Then there's proposed rule-making, and if that proposed rule-making is agreed upon by all or most parties, it gets implemented. You give companies some period of time to implement it. Overall, it should not meaningfully slow down the advent of AGI, or if it does slow it down, it won't be by much or for very long. And probably a little bit of slowing down is worthwhile if it's a significant improvement in safety. My prediction for AGI would roughly match what I think Ray Kurzweil at one point said: 2029, give or take a year. That's roughly my guess too. So if it takes an additional six or twelve months for AGI, that's really not a big deal. Spending a year to make sure AGI is safe is probably worthwhile, if that's what it takes. But I wouldn't expect a substantial slowdown.

Yeah, and I can add that understanding the inner workings of advanced AI is probably the most ambitious project out there, and it also aligns with X.AI's mission of understanding the universe. It's probably not possible for aerospace engineers to build a safe rocket if they don't understand how it works, and that's the same approach we want to take at X.AI for our safety plans. And as the AI advances across different stages, the risk also changes, so we'll want to be fluid across all the stages. - Yeah.

If I think about what actually makes regulations effective in cars and rockets, it's not so much that the regulators are instructing Tesla and SpaceX; it's more that, since we have to think about things internally and then justify them to regulators, it makes us really think about the problem more. And in thinking about the problem more, it becomes safer, as opposed to the regulators specifically pointing out ways to make it safer. It just forces us to think about it more.

I just wanted to make another point, independent of safety. My experience at Alphabet was that there was a lot of red tape around involving external people, other entities to collaborate with, or exposing our models to them, because of all the red tape around exposing anything we were doing internally. So I wanted to ask whether we'll have a bit more freedom to do that here, and what your philosophy is about collaborating with external entities like academic institutions or other researchers in the area.

Yeah, I certainly support collaborating with others. One of the concerns with any kind of large publicly traded company is that they're worried about being embarrassed in some way, or being sued, somewhat proportionate to the size of the legal department. Our legal department currently is zero. It won't be zero forever, but it's also very easy to sue publicly traded companies. We desperately need class-action lawsuit reform in the United States; the ratio of good class-action lawsuits to bad class-action lawsuits is way out of whack, and it effectively ends up being a tax on consumers. Somehow other countries are able to survive without class actions, so it's not clear we need that body of law at all. But that is a major problem with publicly traded companies: non-stop legal, non-stop lawsuits. So yes, I do support collaborating with others and generally being actually open. If you're innovating fast, that is the actual competitive advantage: the pace of innovation, as opposed to any given innovation. That really has been the strength of Tesla and SpaceX; the rate of innovation is the competitive advantage, not what has been developed at any one point. In fact, at SpaceX there are almost no patents, and Tesla open-sources its patents, so anyone can use our patents for free. As long as SpaceX and Tesla continue to innovate rapidly, that's the actual defense against competition, as opposed to patents and trying to hide things, treating patents basically like a minefield. Tesla does continue to file patents and open-source them, in order to basically be a minesweeper rather than a mine-layer, aspirationally a minesweeper. We still get sued by patent trolls, which is very annoying, but we literally file patents and open-source them in order to be a minesweeper.

Hey, Walter. - Hey. A lot of the talk about AI since March has been about large language models and generative AI. You and I, for the book, also discussed the importance of real-world AI, which includes the things coming out of both Optimus and Tesla FSD. To what extent do you see X.AI involved in real-world AI, as a distinction from what, say, OpenAI is doing? And do you have a leg up, to some extent, by having done FSD?

Yeah, right. I mean, Tesla is the leader, I think by a pretty long margin, in real-world AI. In fact, the degree to which Tesla is advanced in real-world AI is not well understood. And since I've spent a lot of time with the Tesla AI team, I kind of know how real-world AI is done. There's lots to be gained by collaboration with Tesla, bi-directionally: X.AI can help Tesla and vice versa. We have some collaborative relationships like this as well; our materials science team, which I think is maybe the best in the world, is actually shared between Tesla and SpaceX. That's actually quite helpful for recruiting the best engineers in the world, because it's more interesting to work on advanced electric cars and rockets than on just one or the other. That was really key to recruiting Charles Kuehmann, who runs the advanced materials team. He was at Apple, and I think pretty happy at Apple, but working on electric cars and rockets sounded pretty good to him. He wouldn't have taken either one of the jobs alone, but he was willing to take both. So I think that is a really important thing. And like I said, there are some pretty big insights we've gained at Tesla in trying to understand real-world AI: taking video input, compressing that into a vector space, and then ultimately into steering and pedal outputs.
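The pipeline shape just described (video in, compact vector, control out) can be sketched in miniature. Everything here is hypothetical: the encoder, the two-number latent, and the toy control policy bear no relation to Tesla's actual architecture.

```python
# Highly simplified sketch: video frames -> small latent vector ->
# (steering, pedal) outputs. All names and dimensions are made up.
import math

def encode(video_frames: list[list[float]]) -> list[float]:
    """Compress a clip (list of flattened frames) into a 2-number latent:
    overall brightness and frame-to-frame change. A stand-in for a
    learned encoder network."""
    flat = [p for frame in video_frames for p in frame]
    brightness = sum(flat) / len(flat)
    motion = sum(
        abs(a - b)
        for f1, f2 in zip(video_frames, video_frames[1:])
        for a, b in zip(f1, f2)
    ) / max(1, len(flat))
    return [brightness, motion]

def control(latent: list[float]) -> tuple[float, float]:
    """Map the latent vector to (steering, pedal) in [-1, 1]."""
    brightness, motion = latent
    steering = math.tanh(brightness - 0.5)  # toy policy, not learned
    pedal = math.tanh(1.0 - motion)         # ease off when motion is high
    return steering, pedal

clip = [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]]  # dummy static clip
steering, pedal = control(encode(clip))
```

A real system would use learned networks at both stages; the point is only the shape of the computation: high-dimensional video compressed into a small vector space, then decoded into steering and pedal commands.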

Yeah. - And Optimus.

Yeah, Optimus is still at the early stages. But we definitely need to be very careful with Optimus at scale: once it's in production, you have to have a hard-coded way to turn Optimus off, for obvious reasons, I think. It's got to be a hard-coded, local ROM cutoff that no amount of updates from the internet can change. So we'll make sure that Optimus is quite easy to shut down. It's extremely important. If a car is intelligent, well, at least you can climb a tree or go up some stairs or something, go into a building. But Optimus can follow you into the building. Any kind of robot that can follow you into a building, that is intelligent and connected, we've got to be super careful with safety. - Thanks.

No problem. Let's see, so any last things we should touch on?

Yeah, one thing I wanted to talk about before we conclude -- sorry about that little feedback -- is just how impactful AI can be as a means of providing equal opportunity to humanity from all walks of life, and the importance of democratizing it, as far as our mission statement goes. If you think about the history of humanity and access to information: before the printing press, it was incredibly hard for people to get access to new forms of knowledge. Being able to provide that level of communication to people is hugely deflationary in terms of wealth and opportunity and equality. We're really at a new inflection point in the development of society when it comes to giving everyone the same potential for great outcomes, regardless of your position in life. So when we talk about removing the monopolization of ideas, about freeing this technology from paid subscription services or, even worse, from the political censorship that may come with whoever supplies the capital for these models, we're really talking about democratizing people's opportunities to better their position in life and advance their social status in the world, at a level unprecedented in history. As a company, when we talk about the importance of truthfulness, and being able to reliably trust these models, learn from them, and make scientific and societal advancements, we're really talking about improving people's quality of life -- for everyone, not just the top tech people in Silicon Valley who have access to it. It's really about giving this access to everyone. And I think that's a mission our whole team shares. -

Before we sign off, just one last question for Elon. Assuming X.AI is successful at building human-level AI, or even beyond human-level AI, do you think it's reasonable to involve the public in the company's decision-making? How do you see that evolving in the long term?

Yeah, as with everything, I think we're very open to critical feedback, and welcome it. We should be criticized; that's a good thing. Actually, one of the things I like about X/Twitter is that there's plenty of negative feedback there, which is helpful for ego compression. The best thing I can think of right now is that any human who wants a vote in the future of X.AI ultimately should be allowed one. So basically, provided you can verify that you're a real human, any human who wishes to have a vote in the future of X.AI should be allowed to have a vote in the future of X.AI. Maybe there's some nominal fee, like ten bucks or something. I don't know: ten bucks, prove you're human, and then you can have a vote, anyone who's interested. That's the best thing I can think of right now, at least.

Conclusion

All right, cool. On that note, thanks for participating, and we'll keep you informed of any progress we make. I look forward to having a lot of great people join the team. Thanks.

Footnotes

1: The Waluigi Effect (mega-post)

2: "If life is so easy, someone from somewhere must have come calling by now." -- Fermi paradox

3: The Riemann hypothesis is a mathematical conjecture first proposed by Bernhard Riemann in 1859. It asserts that all non-trivial zeros of the Riemann zeta function have a real part of 1/2. This hypothesis is widely considered to be one of the most important unsolved problems in mathematics, with many implications for number theory and other areas of mathematics and science.

4: Matrix theory is a branch of mathematics that deals with matrices, which are rectangular arrays of numbers or symbols arranged in rows and columns. It has applications in many fields, including physics, computer science, and engineering. In theoretical physics, matrix theory is used to study the nature of dimensional reduction, while in computer science it is used for data analysis and image processing.

5: Gauge theory is a type of mathematical framework used in physics to describe the behavior of subatomic particles and the fundamental forces that govern their interactions. It involves the concept of gauge invariance, which states that the physical laws describing these particles should be unchanged by certain types of transformations. Gauge theory plays a central role in modern particle physics, including the standard model of particle physics.

6: Quantum gravity is a theoretical framework that seeks to reconcile the theories of quantum mechanics and general relativity. It proposes a quantum mechanical description of gravitational forces and the fabric of spacetime. Many physicists believe that a complete understanding of quantum gravity is necessary to develop a unified theory of all the fundamental forces of nature. However, it remains a challenging problem for modern theoretical physics.

7: Synapse - a small gap at the end of a nerve fiber where the nerve impulses are transmitted to other cells, typically through the release of neurotransmitters. 🧠💥

8: Speciesist - Speciesism is defined as intolerance or discrimination on the basis of species, which is often manifested by human cruelty toward animals.