
Google tests Remy AI agent for Gemini as focus turns to user control

Google is quietly testing a new AI personal agent called Remy inside a staff-only version of its Gemini app. The project, first reported by Business Insider, is part of a broader push to turn Gemini from a chatty assistant into something that actually does things on your behalf. Think of it less like a glorified search bar with a personality and more like a digital employee that learns your habits, monitors what matters to you, and takes action without you having to chase it.

The internal document, which Business Insider reviewed, describes Remy as a “24/7 personal agent.” That phrasing alone signals a shift in ambition. Until now, Gemini has largely been a reactive tool: you ask, it answers. Remy flips that script. It is being designed to integrate across Google services and handle complex, multi-step tasks while learning your preferences over time. That includes everything from managing your calendar to controlling smart home devices or even pulling data from third-party services like Spotify and WhatsApp.

What Remy does differently from other AI agents

So what makes Remy stand out from the agent-like features Google already offers? The company quietly introduced something called Agent Mode earlier, but its capabilities vary depending on your subscription tier and region. Remy, by contrast, is described as more advanced. It is not just following commands. It monitors the things most relevant to you and takes initiative based on what it learns.

Imagine an assistant that notices you always block out Friday afternoons for deep work and automatically reschedules meetings that conflict. Or one that sees a notification from a delivery service, checks your calendar, and rearranges your lunch break so you are home when the package arrives. That is the kind of autonomy Google is aiming for. But with that autonomy comes a serious question: how much control are you willing to hand over?

Connected apps and the scope of Gemini’s reach

Google’s Gemini support documentation already outlines the current reach of its connected services. These include Workspace staples like Gmail, Calendar, Docs, Drive, Keep, and Tasks. But the list goes further: GitHub, Spotify, YouTube Music, Google Photos, WhatsApp, Google Home, and various Android utilities. Remy is expected to tap into many of these, which means it could theoretically read your emails, check your code repos, queue up your playlists, and turn down your thermostat all in one fluid workflow.

That level of integration is powerful. It is also a privacy tightrope, and Google is well aware of this. The company’s Gemini Privacy Hub now provides context around how the agent interacts with connected apps, both Google’s own and third-party services. Users can review and delete Gemini Apps Activity, adjust auto-delete settings, and decide whether their data is used to improve Google AI. You can also manage which apps and data the agent can access, as well as any specific information you have asked Gemini to save.

Control and governance: the invisible backbone of Remy

Google’s existing documentation already covers actions with varying levels of user impact. Some are low stakes, like retrieving information from Workspace apps. Others are more significant, like creating calendar events, sending messages, opening apps, or controlling devices and smart home functions. Remy pushes further into what the company calls “agentic” territory, where the AI acts on your behalf rather than just responding to direct requests.

But here is the tricky part. Google Research has published guidance stating that AI agents should have well-defined human controllers, carefully limited powers, observable actions, and the ability to plan. Google Cloud has echoed that sentiment, emphasizing that agent activities must be transparent and auditable through logging and clear action characterization. The guidance also stresses the least-privilege principle: limit the agent’s powers to only what is necessary for its intended purpose, based on user risk tolerance.
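To make the least-privilege and auditability ideas concrete, here is a minimal sketch of what such a permission gate could look like. This is purely illustrative: the scope names, the `AgentAction` shape, and the `ScopedAgent` class are assumptions, not anything Google has described for Remy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    scope: str        # e.g. "calendar.read", "email.send" (illustrative names)
    description: str

@dataclass
class ScopedAgent:
    granted_scopes: set
    audit_log: list = field(default_factory=list)

    def perform(self, action: AgentAction) -> bool:
        """Least privilege: only act within explicitly granted scopes."""
        allowed = action.scope in self.granted_scopes
        # Every attempt is logged, allowed or denied, so behavior stays auditable.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "scope": action.scope,
            "description": action.description,
            "allowed": allowed,
        })
        return allowed

# The agent can read the calendar but was never granted email access.
agent = ScopedAgent(granted_scopes={"calendar.read"})
agent.perform(AgentAction("calendar.read", "Check Friday availability"))  # permitted
agent.perform(AgentAction("email.send", "Send reschedule notice"))        # denied
```

The key design point is that the denial itself is recorded: an auditor can see not just what the agent did, but what it tried to do.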

That sounds good on paper. But the report did not reveal technical details about Remy’s architecture, the model version behind it, or the level of autonomy currently being tested. It also did not say whether Remy can act independently without user confirmation. Those are not small omissions. They get to the heart of how Remy handles approvals and logs completed actions. If the agent is supposed to be a 24/7 personal assistant, does it wait for a thumbs up before sending that email? Or does it just go ahead and learn from your reaction later?
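The approval question can be framed as a simple policy decision: which action categories run immediately, and which wait for a human thumbs up? The sketch below is hypothetical; the risk tiers, task names, and `confirm` callback are my assumptions, since the report gives no detail on Remy's actual workflow.

```python
# Illustrative risk tiers: read-only lookups vs. actions taken on the user's behalf.
LOW_RISK = {"fetch_calendar", "search_email"}
HIGH_RISK = {"send_email", "unlock_door"}

def execute(task: str, confirm) -> str:
    """Run low-risk tasks immediately; hold high-risk ones pending approval."""
    if task in HIGH_RISK and not confirm(task):
        return "held"      # nothing happens until the user explicitly approves
    return "executed"

# A cautious default: the user declines, so the risky action never fires.
print(execute("fetch_calendar", confirm=lambda t: False))  # executed
print(execute("send_email", confirm=lambda t: False))      # held
```

Whether Remy sits closer to the "always confirm" or the "act first, learn from the reaction" end of this spectrum is exactly what the internal document leaves open.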

Dogfooding and the OpenClaw comparison

The internal document reportedly describes Remy as a dogfooding project. For the uninitiated, that is tech speak for employees testing their own product before any broader release. It is a common practice at companies like Google, but it also means we have no public timeline or even a guarantee that Remy will ever ship. Business Insider did not identify which Google services are part of the current employee test either.

The report drew a comparison to OpenClaw, an AI agent that made headlines earlier this year for its ability to autonomously reply to messages, conduct research, and take actions on behalf of users. OpenAI CEO Sam Altman reportedly hired OpenClaw’s creator in February. Google DeepMind CEO Demis Hassabis has previously talked about the goal of building a truly capable digital assistant, but the company has not confirmed whether Remy will become a public Gemini feature. So for now, it remains a fascinating experiment behind closed doors.

Memory, learning, and the uneasy dance with personalization

One of Remy’s most intriguing reported features is its preference learning function. That means it does not just follow fixed rules. It adapts based on your behavior over time. This brings memory controls into sharp focus. Google’s Privacy Hub already lets users manage information they have asked Gemini to save, and it covers controls for personalization based on past chats and Personal Intelligence. But when an agent is actively learning your habits, the line between helpful and creepy gets very thin.
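One way to picture preference learning with a user-facing kill switch is a memory that only acts on a habit after seeing it repeatedly, and that can be wiped on demand. This is a toy sketch under my own assumptions (the threshold, habit keys, and `PreferenceMemory` class are invented, not Remy's design).

```python
from collections import Counter

class PreferenceMemory:
    def __init__(self, threshold: int = 3):
        self.observations = Counter()
        self.threshold = threshold  # repeats required before acting on a habit

    def observe(self, habit: str):
        self.observations[habit] += 1

    def learned(self, habit: str) -> bool:
        return self.observations[habit] >= self.threshold

    def forget(self, habit: str):
        # User-facing control: revoke a learned preference instantly.
        self.observations.pop(habit, None)

memory = PreferenceMemory()
for _ in range(3):
    memory.observe("friday_deep_work")
print(memory.learned("friday_deep_work"))  # True: habit confirmed by repetition
memory.forget("friday_deep_work")
print(memory.learned("friday_deep_work"))  # False: preference fully revoked
```

The `forget` path matters as much as the learning path: trust in an agent like this depends on revocation being immediate and complete.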

Think of it like a new roommate who watches what you do and starts preemptively making your coffee before you ask. Great if they get it right. Awkward if they grind decaf when you needed the full caffeine jolt. The stakes are higher when the actions involve sending emails, managing your calendar, or controlling your home security. Trust is not granted. It is earned over time, ideally with clear oversight and the ability to revoke access instantly.

Google’s approach to governance suggests they understand this. But the unanswered questions about Remy’s autonomy and approval workflow mean we are still guessing at how much leash the agent will have. For now, the dogfooding phase gives Google employees a chance to find the sharp edges before anyone else does.
