Microsoft is pushing enterprise agents beyond text. In a Microsoft 365 Developer Blog post published on April 7, 2026, the company said MCP Apps and the OpenAI Apps SDK can now bring rich, app-powered interfaces directly into Microsoft 365 Copilot chat. That matters because it changes the job of an enterprise agent. Instead of only answering questions, an agent can now show a form, render a dashboard, collect approvals, surface records, and guide a user through real work without forcing them to leave the conversation.
For teams building AI agents, this is a meaningful product shift. The infrastructure story around agents has advanced quickly over the past year, but one gap has remained stubbornly practical: the user experience. If an agent can reason, call tools, and pull context, yet still dumps every result back into plain text, complex workflows become awkward fast. Microsoft’s MCP Apps move is an attempt to fix that inside the daily interface many enterprise users already inhabit.
What Microsoft launched on April 7, 2026
Microsoft said agents in Microsoft 365 Copilot can now deliver interactive UI experiences through either MCP Apps or the OpenAI Apps SDK. These experiences are rendered as secure, sandboxed interfaces inside Copilot chat. Instead of a wall of text, users can see tables, forms, diagrams, dashboards, maps, rich media, and other app surfaces directly in the chat flow.
The company framed the release as a move from conversational agents to interactive ones. That is the right lens. A conversational agent can explain a process. An interactive agent can help complete it. In Microsoft’s examples, partners are using this model for expense workflows, data entry, visualizations, project management, design tasks, and learning experiences.
Microsoft also tied the launch to Work IQ, its layer for grounding Copilot in meetings, emails, and organizational activity. In practice, that means an agent is not just displaying UI widgets for decoration. It can combine enterprise context with an interactive action surface, such as matching receipts from email, updating records after a meeting, or presenting next steps alongside relevant work data.
Why this matters for enterprise AI teams
The biggest takeaway is that agent UX is becoming a platform layer. Many businesses have spent the last year evaluating model quality, tool use, orchestration, and governance. Those matter, but adoption often rises or falls on a simpler question: can employees actually use the agent inside the flow of work?
MCP Apps strengthen that answer. If a sales agent can show a structured account view inside Copilot, a finance agent can present a reconciliation form in context, or a service agent can display dispatch maps and approval options inline, those agents become easier to trust and easier to operationalize. The agent is no longer asking the user to mentally translate a text response into action.
This also matters because Microsoft is converging multiple standards and ecosystems at once. The company explicitly supports both MCP Apps and the OpenAI Apps SDK in Copilot chat. That lowers the odds that teams have to bet on a single proprietary presentation model. For enterprises already exploring MCP as the tool layer, this makes the protocol more strategically relevant. It starts to look less like a back-end plumbing standard and more like part of the full agent experience stack.
How the model works in practice
Microsoft describes two main interface modes. Inline mode is required and is meant for lightweight widgets directly in the conversation. Side-by-side mode is optional and is designed for richer workflows that need more room, such as multistep editing, review, comparison, or other immersive tasks.
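To make the two modes concrete, here is a minimal sketch of how a tool result might pair structured data with a reference to a UI widget and a preferred rendering mode. This is illustrative only: the field names (`content`, `resource_uri`, `preferred_mode`) and the `ui://` URI are hypothetical and do not reproduce the actual MCP Apps wire format.

```python
# Illustrative sketch only: field names and the ui:// URI below are
# hypothetical, not the actual MCP Apps wire format.

def make_tool_result(rows, mode="inline"):
    """Pair structured tool output with a reference to a UI resource.

    mode: "inline" for lightweight widgets in the conversation,
          "side_by_side" for richer workflows that need more room.
    """
    if mode not in ("inline", "side_by_side"):
        raise ValueError(f"unknown interface mode: {mode}")
    return {
        # Plain-text fallback for surfaces that cannot render the widget.
        "content": [{"type": "text", "text": f"{len(rows)} expense rows"}],
        "ui": {
            # Hypothetical URI naming the widget template to render.
            "resource_uri": "ui://expenses/approval-table",
            "preferred_mode": mode,
            "data": rows,
        },
    }

# A compact approval card fits inline; a multistep review would instead
# request the side-by-side surface.
result = make_tool_result(
    [{"id": 1, "amount": 42.50}, {"id": 2, "amount": 19.99}],
    mode="inline",
)
```

The design point the sketch captures: the agent always returns a text fallback, and the host decides whether the richer surface is rendered inline or in a larger panel.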
That distinction is useful for builders. Not every agent needs a full application surface. Some only need a compact approval card, a quick picker, or a summary table. Others need a larger operational canvas. The important point is that Copilot is now being positioned as both the conversational layer and the container for action-oriented interfaces.
Microsoft also emphasized that the platform supports OAuth 2.1, Microsoft Entra single sign-on, and anonymous authentication patterns. Agents with interactive app experiences inherit the same governance and administrative controls used for other declarative agents. That detail matters for enterprise rollout. The product story is not just about nicer UI; it is about giving teams a governed way to bring app behavior into Copilot.
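Because OAuth 2.1 makes PKCE mandatory for authorization-code flows, teams wiring an agent to a protected API will typically start by generating a code verifier and challenge. The derivation below follows RFC 7636 (S256 method); how the pair is handed to Copilot's auth configuration is not shown and would depend on the platform's own setup.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate an OAuth 2.1 / RFC 7636 PKCE verifier and S256 challenge."""
    # 32 random bytes -> 43-character base64url verifier (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA-256(verifier)), also without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge accompanies the authorization request; the verifier is
# sent only later, when exchanging the authorization code for a token.
```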
What businesses should do next
If your company is experimenting with AI agents, this launch is a cue to revisit which workflows belong in chat and which need structured interfaces. Many internal agents fail not because the model is weak, but because the task actually needs a table, confirmation step, visual comparison, or embedded action panel. MCP Apps give teams a more credible way to support those patterns inside Microsoft 365.
A good near-term test is to pick a workflow with clear context and clear actions: expense handling, CRM updates, approval routing, knowledge workflows, or employee service requests. If the task depends on both organizational context and structured follow-through, it is a strong candidate.
More broadly, Microsoft’s move reinforces where enterprise AI is heading. The winning agents are unlikely to be pure chatbots. They will be systems that combine context, tools, governance, and interface design into one operational layer. MCP Apps in Copilot chat are a notable step in that direction.
Planning enterprise agents that do more than chat? Nerova helps businesses generate AI agents and AI teams designed for real workflows, grounded context, and production deployment.