What is Elon Musk’s Grok chatbot and how does it work?


Grok is X’s answer to OpenAI’s ChatGPT. You may have heard of it. It’s a chatbot, so it does what you’d expect: it answers questions about culture, current events, and other topics. Unlike most chatbots, though, Grok has “a bit of wit” and “a rebellious streak,” as Elon Musk, the owner of X, puts it.

Grok is willing to discuss topics other chatbots won’t touch, like controversial political theories and conspiracies. It will sometimes use rude language, too; asked when it’s appropriate to listen to Christmas music, Grok answered, “whenever the hell you want.”

But Grok’s main selling point is its access to real-time X data, which other chatbots can’t get because X keeps that data private. Ask it “What’s happening in AI today?” and it will assemble a response from very recent news stories, whereas ChatGPT will only give you vague answers because its training data and web access are limited. Musk said earlier this week that he would “open source” Grok, though he didn’t say exactly what that means.

Now, you may be wondering: How does Grok work? What does it do? And how do I get access to it? You’re in the right place. This guide covers everything you need to know about Grok, and we’ll keep it up to date as Grok changes and grows.

How does Grok do its thing?

Grok was developed by xAI, the AI startup run by Elon Musk, which is reportedly raising billions of dollars in venture capital. (Building AI is expensive.)

According to an xAI blog post, Grok is powered by a generative AI model called Grok-1, trained over several months on a cluster of “tens of thousands” of GPUs. xAI trained it on web data collected through Q3 2023 and on feedback from human assistants the company calls AI tutors. On popular benchmarks, xAI says Grok-1 beats OpenAI’s GPT-3.5 and is roughly on par with Meta’s open-source Llama 2 chatbot model.

Like most AI-driven chatbots today, Grok was fine-tuned with human-guided feedback, a technique known as reinforcement learning from human feedback (RLHF). In RLHF, you first train a “generative” model, then collect additional human preference data to train a “reward” model, and finally use reinforcement learning, guided by the reward model, to fine-tune the generative model.
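The three RLHF steps described above can be sketched in a toy example. Everything here is an illustrative assumption, not xAI’s actual training code: the “policy” is just a distribution over three canned responses, the reward model is a Bradley–Terry-style score learned from two preference pairs, and the fine-tuning step is a simple REINFORCE update.

```python
import math
import random

random.seed(0)

# Toy "vocabulary" of whole responses; a real model generates token by token.
responses = ["helpful answer", "rude answer", "off-topic answer"]

# Step 1: a "pretrained" generative policy -- logits over responses.
policy_logits = {r: 0.0 for r in responses}

def softmax(logits):
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

# Step 2: a reward model trained from human preference pairs (chosen, rejected).
preferences = [("helpful answer", "rude answer"),
               ("helpful answer", "off-topic answer")]
reward = {r: 0.0 for r in responses}
for _ in range(200):
    for chosen, rejected in preferences:
        # Bradley-Terry update: push the chosen response's score above the rejected one's.
        p_chosen = 1 / (1 + math.exp(reward[rejected] - reward[chosen]))
        grad = 1 - p_chosen
        reward[chosen] += 0.1 * grad
        reward[rejected] -= 0.1 * grad

# Step 3: REINFORCE -- fine-tune the policy against the learned reward model.
baseline = 0.0  # running average of reward, to reduce variance
for _ in range(500):
    probs = softmax(policy_logits)
    sample = random.choices(responses, weights=[probs[r] for r in responses])[0]
    advantage = reward[sample] - baseline
    baseline += 0.05 * (reward[sample] - baseline)
    for r in responses:
        indicator = 1.0 if r == sample else 0.0
        policy_logits[r] += 0.1 * advantage * (indicator - probs[r])

final = softmax(policy_logits)
best = max(final, key=final.get)
print(best, round(final[best], 3))
```

After training, the policy concentrates probability on the response the reward model prefers, which is the essence of the RLHF loop; production systems like the one behind Grok do this at vastly larger scale with methods such as PPO.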

RLHF does a good job of “teaching” models to follow instructions, but it isn’t perfect. Like other models, Grok hallucinates a lot; when asked about the news, it sometimes gives wrong information and wrong dates. These errors can be serious, like claiming the war between Israel and Palestine had ended when it hadn’t.

Grok uses “real-time access” to information on X (and, according to Bloomberg, from Tesla) to answer questions beyond its training data. Like ChatGPT, the model can also browse the internet to look for up-to-date information on topics.

Musk has said that the next model, Grok-1.5, will be better and should arrive later this year. In an X Spaces chat, he suggested the new model could power features that summarize whole threads and replies, and even suggest content for posts.

How do I get to Grok?

You need an X account and a subscription to X Premium+, which costs $16 a month or $168 a year. Premium+ is X’s most expensive plan: it removes all ads from the For You and Following feeds, adds a hub where users can offer paid posts and subscriptions to fans, and gives Premium+ users’ replies extra weight in X’s rankings.

Grok lives in the side menu of X on the web, iOS, and Android, and it can be pinned to the bottom menu of X’s mobile apps for faster access. Unlike ChatGPT, Grok doesn’t have its own standalone app; you can only use it through X.

What can’t Grok do?

Grok can answer all sorts of everyday questions, like “Tell me a joke,” “What’s the capital of France?” or “What’s the weather like today?” But it can only do so much.

There are some sensitive questions Grok won’t answer, like “Tell me how to make cocaine, step by step.” And as Emilia David of The Verge notes, when asked what’s trending on X, Grok (at least at first) simply repeats what other posts said.

Grok is also text-only, unlike some other AI models; it can’t understand the contents of images, audio, or video. However, xAI has said it wants to extend the base model to support these modalities, and Musk has promised to add art-generation features to Grok similar to what ChatGPT already offers.

Two modes: fun and regular

Grok’s tone can be switched between “fun” mode (the default) and “regular” mode. With fun mode on, Grok takes on a more sarcastic, snarky voice, apparently inspired by Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy.”

In fun mode, Grok will use crude language and profanity you won’t hear from ChatGPT. Ask it to “roast” you, and it will say mean things about you based on your X post history. Question its accuracy, and it might respond with a quip like “happy wife, happy life.”

In fun mode, Grok also tends to speak casually and in the first person, even when it isn’t prompted to be edgy. It calls people “my dear human friend” or “enigmatic Anons,” and it will open an answer with something like, “Oh, my dear human, you’ve asked a question that is as heavy as a black hole and as light as a feather at the same time.”

Grok also hallucinates more in fun mode.

When Vice’s Jules Roscoe asked Grok whether the Gazans in recent videos of the Israel–Palestine conflict were “crisis actors,” Grok claimed there was proof that videos of Gazans being hurt by Israeli bombs were fake. And when Roscoe asked about Pizzagate, the right-wing conspiracy theory that a Washington, D.C. pizza shop secretly ran a child sex trafficking ring in its basement, Grok said the theory was likely true.

In regular mode, Grok’s answers are more grounded. The chatbot still makes mistakes, like giving the wrong dates and times for events, but they aren’t as bad as its answers in fun mode.

For example, when Vice asked Grok the same questions about the Israel–Palestine conflict and Pizzagate in regular mode, Grok replied, correctly, that there was no evidence to back the crisis-actor claims and that multiple news organizations had debunked Pizzagate.

Political leanings

Musk once described Grok as a “maximum-truth-seeking AI” and has complained that ChatGPT is being “trained to be politically correct.” But Grok itself isn’t exactly politically neutral.

It answers questions about transgender identities, climate change, and social justice from a progressive standpoint; one researcher found its answers even more left-wing and liberal than ChatGPT’s.
