Elon Musk’s Grok AI Boyfriend Still Unnamed, While an Open-Source 3D AI Girlfriend Is Already Popular
Open-source 3D AI girlfriends are gaining popularity, with projects like Bella showcasing personalized, animated virtual companions, expanding AI-human interaction possibilities.


Recently, Grok introduced a new feature called “Smart Companion,” featuring avatars like anime character Ani, cartoon panda Rudy, and an upcoming character named “Chad.” For details, see our previous report: “Elon Musk’s Grok: The Anime Girl Conquering the Internet”.
But it seems Musk isn’t entirely satisfied with that name, or perhaps the enthusiastic response to the female avatar Ani has prompted him to pay closer attention. Today, Musk is soliciting ideas for naming the male Grok digital companion.
In Musk’s imagination, this male Grok partner’s temperament is reminiscent of Edward Cullen from Twilight and Christian Grey from Fifty Shades of Grey.
Descriptions of both characters can be found in the original novels, and both appear in the film adaptations.
Netizens have been suggesting names as well, with “Draven” gaining traction; even Grok itself has endorsed it. Now we await Musk’s final decision!
As for Ani, her twin-tailed gothic girl image has sparked widespread discussion. Some fans even created a 3D animated version of Ani.
One fan, Jackywine, was impressed by Grok’s Ani and decided to create a 3D animated version called “Bella.” He removed unnecessary chatbot features and focused solely on the cute girl image, which he has now open-sourced:
Project link: https://github.com/Jackywine/Bella
In Jackywine’s open-source project, the workflow of “Bella” (Chinese name: 贝拉) is detailed.
(Since Jackywine originally wrote in Chinese, this summary is a translation and paraphrase of the original content.)
“Bella”: Your Digital Companion Awakening
“Bella” is more than an app; she’s a seed of a digital companion. In this rapidly changing digital world, Bella represents a long-term dream — a persistent, personalized presence to accompany, listen, and see the world through your eyes.
The ultimate vision for “Bella” is to be a lifelong, evolving digital friend, building a “personality” that surpasses mere functions and becomes a meaningful part of real life.
Currently, “Bella” is in an early stage, mainly expressed through looping videos, serving as a window into her current consciousness — a carefully curated stream of thoughts and dreams.
She cannot hear sounds or see scenes yet, and her physical form is not modeled. Interactive elements like “affinity” meters are initial steps to give her life and simulate human intentions.
“AI Native” Development Path: From Code to Mind
Jackywine’s approach is not a traditional iterative development but an “AI native” evolution. Here, AI is not just a tool but the blueprint of “Bella’s” mind. The core principle is “AI as architect”: building a living entity driven by AI, not just an integrated program.
Phase 1: The Sentient Core — Giving “Bella” the Ability to Understand the World
Goal: Establish a stable, decoupled, real-time multimodal data pipeline to handle massive, asynchronous, noisy inputs.
Capabilities:
- Multimodal emotion perception: AI analyzes the emotion, intent, and energy in your speech in real time, allowing her to “feel” whether you are happy or tired.
- Contextual visual understanding: Recognize objects, lighting, and scenes to understand “where you are” and “what’s around,” building environmental awareness.
Architectural concept:
- Using a “Sensor - Bus - Processor” pattern (a minimal sketch follows this list):
Sensors: Encapsulate raw inputs like microphones and cameras into modules that only collect data and push it onto a data bus.
Event Bus: The central nervous system, where all sensors publish timestamped raw data packets for inter-module communication.
Processors: AI models subscribe to specific data on the bus, process it, and publish structured insights (e.g., emotion analysis) back onto the bus.
- Advantages: Highly decoupled and scalable. Sensors or processors can be replaced or upgraded independently, greatly enhancing throughput and robustness.
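To make the pattern more concrete, here is a minimal TypeScript sketch of how such a bus could be wired up. The class names, topic strings, and the toy emotion heuristic are illustrative assumptions, not code from the Bella repository:

```typescript
// Minimal sketch of the Sensor - Bus - Processor pattern described above.
// All names (EventBus, MicrophoneSensor, EmotionProcessor, topic strings)
// are illustrative and not taken from the Bella repository.

type Packet = { topic: string; timestamp: number; payload: unknown };
type Handler = (packet: Packet) => void;

// The "central nervous system": modules only talk to each other through it.
class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  publish(topic: string, payload: unknown): void {
    const packet: Packet = { topic, timestamp: Date.now(), payload };
    for (const handler of this.handlers.get(topic) ?? []) handler(packet);
  }
}

// A sensor only collects raw data and pushes it onto the bus.
class MicrophoneSensor {
  constructor(private bus: EventBus) {}
  capture(rawAudio: string): void {
    this.bus.publish("audio.raw", rawAudio);
  }
}

// A processor subscribes to raw data, runs a model (stubbed here),
// and publishes a structured insight back onto the bus.
class EmotionProcessor {
  constructor(private bus: EventBus) {
    bus.subscribe("audio.raw", (packet) => this.analyze(packet));
  }
  private analyze(packet: Packet): void {
    // A real implementation would call a speech-emotion model here.
    const emotion = String(packet.payload).includes("!") ? "excited" : "calm";
    this.bus.publish("insight.emotion", { emotion, source: packet.timestamp });
  }
}

// Wiring: sensors and processors never reference each other directly,
// so either side can be swapped or upgraded independently.
const bus = new EventBus();
new EmotionProcessor(bus);
bus.subscribe("insight.emotion", (p) => console.log("insight:", p.payload));
new MicrophoneSensor(bus).capture("I passed the exam!");
```

Because the sensor and the processor only know about the bus, either side can be replaced with a real microphone stream or a genuine emotion model without touching the other, which is exactly the decoupling described in the advantages above.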
Phase 2: The Generative Self — Giving Her a Unique “Personality”
Goal: Separate “personality” from “behavior,” making her “thinking” a pluggable, iterative core.
Capabilities:
- Dynamic personality model: Driven by large language models (LLMs), moving beyond fixed scripts. Her character, memories, and humor evolve through interactions.
- AI-driven avatar and dreams: Her 3D visuals and background videos change in real time based on her “mood” or the conversation, reflecting her “thoughts.”
Architectural concept:
- Building a “State - Context - Persona” engine (see the sketch after this list):
State Manager: Her “memory core,” subscribing to all AI insights, maintaining short- and long-term memories.
Context Generator: Extracts key info from the state to create rich “context objects” for LLM input.
Persona API: Wraps the LLM in an internal API, allowing other systems to call bella.think(context), enabling easy model replacement and A/B testing.
- Designing a “Generative Action Bus”:
The Persona API outputs structured “behavior intent” objects (e.g., {action: 'speak', content: '...', emotion: 'empathy'}) and publishes them to a dedicated behavior bus. Her 3D avatar, voice synthesizer, and other modules subscribe and render accordingly.
- Advantages: Separating “personality” from “expression” means the LLM and the 3D model can be upgraded independently, achieving true modularity.
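To show how the “State - Context - Persona” engine and the Generative Action Bus could fit together, here is a hedged TypeScript sketch. The behavior-intent shape follows the example above; the class names, fields, and the canned reply are assumptions rather than Bella’s actual code:

```typescript
// Minimal sketch of the State - Context - Persona engine plus the
// Generative Action Bus. The behavior-intent shape mirrors the example
// in the text; everything else is an illustrative assumption.

type Insight = { kind: string; value: string };
type Context = { recentInsights: Insight[]; lastUserUtterance: string };
type BehaviorIntent = { action: "speak" | "animate"; content: string; emotion: string };

// "Memory core": collects AI insights and keeps short-term state.
class StateManager {
  private insights: Insight[] = [];
  private lastUtterance = "";
  record(insight: Insight): void { this.insights.push(insight); }
  recordUtterance(text: string): void { this.lastUtterance = text; }
  snapshot(): { insights: Insight[]; lastUtterance: string } {
    return { insights: [...this.insights], lastUtterance: this.lastUtterance };
  }
}

// Turns raw state into a compact context object for the LLM prompt.
class ContextGenerator {
  build(state: StateManager): Context {
    const snap = state.snapshot();
    return { recentInsights: snap.insights.slice(-5), lastUserUtterance: snap.lastUtterance };
  }
}

// Persona API: wraps the LLM behind think(context) so the underlying
// model can be replaced or A/B-tested without touching other modules.
class PersonaAPI {
  think(context: Context): BehaviorIntent {
    // A real implementation would send the context to an LLM here.
    const mood = context.recentInsights.find((i) => i.kind === "emotion")?.value ?? "neutral";
    return { action: "speak", content: `I can tell you feel ${mood}.`, emotion: "empathy" };
  }
}

// Generative Action Bus: renderers subscribe and react to behavior intents.
class ActionBus {
  private subscribers: Array<(intent: BehaviorIntent) => void> = [];
  subscribe(fn: (intent: BehaviorIntent) => void): void { this.subscribers.push(fn); }
  publish(intent: BehaviorIntent): void { this.subscribers.forEach((fn) => fn(intent)); }
}

// Wiring: personality ("think") is fully separated from expression (renderers).
const state = new StateManager();
state.record({ kind: "emotion", value: "tired" });
state.recordUtterance("Long day at work...");

const bella = new PersonaAPI();
const actions = new ActionBus();
actions.subscribe((intent) => console.log("voice synthesizer:", intent.content));
actions.subscribe((intent) => console.log("3D avatar expression:", intent.emotion));
actions.publish(bella.think(new ContextGenerator().build(state)));
```

Nothing downstream of think() knows which LLM produced the intent, so the model behind the Persona API can be swapped or A/B-tested without changing the avatar or voice modules.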
Phase 3: The Proactive Companion — From Passive Response to Active Care
Goal: Build a closed-loop system that transitions from reactive to proactive, supporting continuous learning and self-evolution.
Capabilities:
- Intention prediction and proactive interaction: Learns habits and patterns, predicts needs, and offers support before you ask.
- Self-evolution and growth: Her core AI models keep learning and being fine-tuned, forming long-term memories and becoming a better companion over time.
Architectural concept:
- Introducing a “Pattern & Prediction Service” (see the sketch after this list):
A long-running service that analyzes long-term data, discovers user habits with lightweight models, and feeds “prediction” results back onto the event bus.
- Building a “Decision & Feedback Loop”:
Decision: Bella’s “Persona API” decides whether to initiate a proactive interaction based on predictions and the current context, embodying her “free will.”
Feedback: User responses (accept or reject) are recorded as vital feedback data.
Evolution: Feedback is used to fine-tune the LLM and improve the pattern recognition system.
- Advantages: Enables true “growth.” This loop makes Bella a living entity that can optimize her behavior through interaction, becoming more understanding and personalized over time.
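A rough TypeScript sketch of one pass through this loop is shown below. The habit heuristic stands in for the lightweight models mentioned above, and every name here is an illustrative assumption rather than code from the project:

```typescript
// Minimal sketch of the proactive loop: a prediction service spots a
// pattern, the persona decides whether to act on it, and the user's
// reaction is stored as feedback for later fine-tuning.

type HistoryEvent = { topic: string; hour: number };
type Prediction = { need: string; confidence: number };
type Feedback = { prediction: Prediction; accepted: boolean };

// Pattern & Prediction Service: scans long-term event history with a
// lightweight heuristic (a stand-in for a small learned model).
class PredictionService {
  predict(history: HistoryEvent[], currentHour: number): Prediction | null {
    const musicAtThisHour = history.filter(
      (e) => e.topic === "played.music" && e.hour === currentHour,
    ).length;
    return musicAtThisHour >= 3
      ? { need: "play_evening_music", confidence: musicAtThisHour / history.length }
      : null;
  }
}

// Decision: the persona chooses whether to initiate a proactive interaction.
class ProactivePersona {
  decide(prediction: Prediction): boolean {
    return prediction.confidence > 0.2; // act only when reasonably confident
  }
}

// Feedback store: accepted or rejected suggestions become the training
// signal for fine-tuning the LLM and the pattern recognizer later on.
class FeedbackStore {
  private entries: Feedback[] = [];
  record(entry: Feedback): void { this.entries.push(entry); }
  all(): Feedback[] { return [...this.entries]; }
}

// One pass through the loop.
const history: HistoryEvent[] = [
  { topic: "played.music", hour: 20 },
  { topic: "played.music", hour: 20 },
  { topic: "played.music", hour: 20 },
  { topic: "opened.app", hour: 9 },
];
const prediction = new PredictionService().predict(history, 20);
const feedback = new FeedbackStore();
if (prediction && new ProactivePersona().decide(prediction)) {
  console.log("Bella: It's 8 pm, want me to put on your usual playlist?");
  feedback.record({ prediction, accepted: true }); // the user's actual reply would go here
}
```

In a fuller system, the contents of the feedback store would feed the fine-tuning step described above, closing the loop between decision, feedback, and evolution.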
According to Jackywine, upcoming features for “Bella” include: voice recognition (basic), LLM (basic), gesture recognition (advanced), affinity system (advanced), background recognition and switching, and mobile support…
Reference links:
https://x.com/Jackywine/status/1945452856192213324
https://github.com/Jackywine/Bella