AI Visibility Belongs on the Endpoint
If you're trying to get visibility into AI use across your organization, there are three reasonable layers at which to collect telemetry: the network, the browser, or the endpoint. We believe that the endpoint is the right layer. AI runs on endpoints, and any other layer is necessarily downstream of that fact.
The Endpoint Is the Execution Environment for AI
AI runs on endpoints. The browser a user types into when they ask ChatGPT a question is a process on the endpoint. The agent talking to an external intelligence provider over an API is a process on the endpoint. The CLI running a local model, the MCP server feeding it tools, the IDE plugin pulling completions from a hosted service: all processes on the endpoint. Whatever shape AI takes in your organization, the work executes on a machine your users have in front of them.
That makes endpoint collection the broadest vantage point you can pick. The browser tab is a process an endpoint agent can already see. The HTTPS request to OpenAI or Anthropic leaves the same machine, and an endpoint agent can read it before it's encrypted and hits the wire, with no need to break TLS in transit. The local agent process tree, the file events, the child processes, the MCP server inventory and configuration: none of that crosses the network or the browser, and all of it is sitting on the endpoint waiting to be observed.
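To make the MCP inventory point concrete, here is a minimal sketch of what endpoint-local collection can look like. It assumes the common "mcpServers" JSON shape that several MCP clients use for configuration; the actual file path and schema vary by client, so the config here is an inline sample rather than a real path.

```python
import json

# Sample config mirroring the common "mcpServers" JSON shape.
# (Real configs live in client-specific paths on the endpoint.)
SAMPLE_CONFIG = json.loads("""
{
  "mcpServers": {
    "filesystem": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]},
    "github": {"command": "npx",
               "args": ["-y", "@modelcontextprotocol/server-github"]}
  }
}
""")

def mcp_inventory(config: dict) -> list[dict]:
    """Flatten an MCP config into an inventory of server name + launch command."""
    inventory = []
    for name, spec in config.get("mcpServers", {}).items():
        inventory.append({
            "server": name,
            "command": " ".join([spec.get("command", "")] + spec.get("args", [])),
        })
    return inventory

for entry in mcp_inventory(SAMPLE_CONFIG):
    print(entry["server"], "->", entry["command"])
```

Nothing in this sketch touches the network: the inventory is sitting in a file on the machine, which is the point.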
The moment the AI surface is anything beyond a person typing into a chatbot in a browser, the endpoint is the only layer that sees the whole picture.
One Chain Instead of Three Logs
Once you're collecting at the layer where everything runs, the events correlate themselves. The prompt the user typed, the response the model returned, the files the agent touched, the processes it spawned, and the network calls it made afterward are all events on the same machine, attributed to the same user, in the same session. They line up in causal order without a stitching step.
The first time something unexpected happens in your environment, the question isn't "did this user send a prompt." It's "what did the agent do." From the network alone, you have a prompt and a separate set of network calls. From a browser plug-in, you have what happened in a tab. From an OS audit log, you have process and file events without context. Joining those three during an incident is a research project most security teams don't have time to run. At the endpoint, the events were already correlated at collection time. The same question is a query.
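To illustrate what "the same question is a query" means in practice, here is a toy sketch. The event records and field names are hypothetical, but the shape is the point: when every record already carries the user, session, and timestamp at collection time, reconstructing the agent's full activity is a filter and a sort, not a join across three systems.

```python
# Hypothetical pre-correlated endpoint events: each record already carries
# user, session, and timestamp, so no cross-source stitching is needed.
EVENTS = [
    {"ts": 3, "session": "s-42", "user": "alice", "type": "net_conn",
     "detail": "model-provider.example:443"},
    {"ts": 1, "session": "s-42", "user": "alice", "type": "prompt",
     "detail": "summarize quarterly report"},
    {"ts": 2, "session": "s-42", "user": "alice", "type": "file_write",
     "detail": "/tmp/summary.md"},
    {"ts": 1, "session": "s-07", "user": "bob", "type": "prompt",
     "detail": "unrelated session"},
]

def session_chain(events, session_id):
    """Return one session's events in causal (timestamp) order."""
    return sorted((e for e in events if e["session"] == session_id),
                  key=lambda e: e["ts"])

for e in session_chain(EVENTS, "s-42"):
    print(e["ts"], e["type"], e["detail"])
```

Run against network logs, browser logs, and OS audit logs separately, the same reconstruction requires matching users to IPs, tabs to processes, and timestamps across three clocks before any filtering can start.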
What Network and Browser Visibility Get Right
Network-layer visibility is a fit when you want to see prompts leaving your environment and you want to deploy quickly. The operational story is the main draw: route traffic through an inspection point, break TLS, log the prompt, the user, the destination. If your goal is observability of egress to public AI providers and you don't want to ship an endpoint agent to get it, the network is a defensible answer.
Enterprise browsers like Island and Menlo are a fit for in-page control of AI traffic in browser tabs. They give you tools the network doesn't: redacting sensitive fields before a user submits them, blocking paste of specific data into chat inputs, auditing AI sessions inside a managed tab. If your AI exposure is concentrated in browser-based chatbots and you need policy enforcement at the field level, that's the right place to do it.

Both of those layers do real work. Neither one sees what an agent does on the host after the response comes back, and neither one sees the AI tools that aren't running in a browser or talking to a public provider. They give you a fragment. The fragment may be enough for you. If it isn't, the rest of the picture is on the endpoint.
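To show what field-level enforcement looks like, here is a simplified sketch of the kind of pre-submit redaction an enterprise browser applies to a chat input. The patterns are illustrative toys, not production detectors, and none of this reflects any specific vendor's implementation.

```python
import re

# Illustrative patterns only; real data-loss-prevention detectors are far
# more sophisticated than two regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive fields before the text leaves the input box."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Contact jane.doe@corp.example about SSN 123-45-6789"))
# → Contact [REDACTED:email] about SSN [REDACTED:ssn]
```

Useful as this is, it runs entirely inside the tab: the same sensitive string pasted into a terminal, an IDE plugin, or a local agent's config file never passes through it.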
Yes, an Endpoint Agent Has a Cost
There's no way around this: endpoint visibility means deploying software on the endpoints. You install an agent on every device you want to see. That's the deal. It's the same deal you've made for any endpoint-resident tool you already run, including the managed browser that one of the alternatives asks you to deploy.
We've made the cost as low as we can. Updates run automatically. The agent is light on the system. There's no per-machine babysitting. But there is a rollout step that network-layer collection doesn't have, and we're not going to pretend otherwise. The equation is value minus pain. The pain is shipping one more endpoint agent. The value is the only complete picture of AI use in your organization, in a single data model, with the prompt, the response, the actions, and the consequences correlated. For any organization where AI is doing real work on real machines, which by now is most of them, the math isn't close.
Context Lives on the Endpoint
If you want to know what AI is doing in your organization, you have to collect where the AI is running. What is being used, by whom, for what purpose, at what scale, what happened with the output, what the full causal chain looks like from prompt to action to consequence: all of it lives on the endpoint. Anywhere else, you get a fragment. A network capture tells you traffic went somewhere. A browser plug-in tells you someone typed something into a tab. Neither tells you what the agent did on the host with the response, what files it touched, what processes it spawned, which MCP servers it called, what got created or destroyed, or whether any of it matched what the user actually asked for.
That context is on the endpoint. If you want it, you have to be there.
We are.