Build the research engineering platform a Type I civilization relies on.

Up top, you are looking at three live, geometry-aware panels running in a shared shader pipeline:

  • QED embedding: the base curvature slice.
  • Eigenmode fingerprint: Laplace–Beltrami bands draped on a warped manifold, blue-to-orange curvature, glowing cyan contours, rim-lit stripes that form the basis the model thinks in.
  • Delta-PINNs reconstruction: a flat grid with a mildly distorted field wiped against the same field on a curved surface, shared colormap, sensor dots along the boundary, and an animated wipe that shows how geometry-aware PINNs close the mismatch.
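
The eigenmode panel above can be sketched in miniature. This is an illustrative stand-in only, assuming a flat grid so the Laplace–Beltrami operator reduces to a plain graph Laplacian; the actual demo runs a warped-manifold version in shaders, and the function name `grid_laplacian` is ours, not the demo's.

```python
import numpy as np

def grid_laplacian(n):
    """5-point graph Laplacian on an n x n grid: a flat-metric stand-in
    for the Laplace-Beltrami operator on the warped manifold."""
    N = n * n
    L = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    L[k, k] += 1.0       # degree term
                    L[k, idx(a, b)] -= 1.0  # neighbor coupling
    return L

n = 16
L = grid_laplacian(n)
# eigh returns eigenvalues in ascending order; the lowest modes are the
# smooth "bands" a shader would drape over the surface as stripes
vals, vecs = np.linalg.eigh(L)
bands = vecs[:, :6].T.reshape(6, n, n)  # each band is an n x n scalar field
```

The first eigenvalue is zero (the constant mode), and each subsequent band oscillates at a higher spatial frequency, which is what makes them a useful basis for the fingerprint.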

It updates in real time and is meant to be pushed, broken, and rebuilt. Ask yourself: how would I build this better?

You might work on

If you join MMI, you will work on agents that reason over plasma-class systems and expand the operational consciousness of enterprises. Example directions:

  • Agents for plasma-class systems: Build agents that interpret fusion, EM, or climate fields, infer structure, and propose interventions. Replace toy metrics with GR + EM operators and expose them through a clean WebGPU interface.
  • Visuals as agent-addressable manifolds: Treat every visualization as an observation manifold. Let agents trace geodesics, measure curvature, detect regime shifts, and request new slices instead of hand-crafted dashboards.
  • Closing the plasma–organization loop: Apply the same stability principles used in plasma control to human organizations. Build agents that detect attention sinks, coordination turbulence, and information shear in real enterprises.
  • Automated scientific reasoning: Design agents that propose experiments, tune PDE parameters, discover symmetries, and validate reduced models against real data. Target domains include fusion, climate, and biosensing.
  • Expanding enterprise phase space: Model enterprises as dynamical systems with controllable attractors. Build tools and agents that map this phase space, surface unstable trajectories, and recommend actions to increase the system’s controllable variety.

You will not be handed a fixed spec. You will design the ontology, the interfaces, and the safety constraints, then ship versions and measure how much intelligence and control you actually add to the world.

Who this is for

Likely a fit if:

  • You have read actual QED/QFT or GR texts and can distinguish canonical results from speculation.
  • You can switch coordinate systems mentally and know what transforms and what does not. You are comfortable with metrics, embeddings, flows, and practical approximations.
  • You have built nontrivial visual systems (custom GLSL/WebGPU shaders, scientific visualization kernels, simulation dashboards, or similar).
  • You can go from a concept sketch to a running prototype quickly and treat wrong-but-illuminating artifacts as part of the process.
  • You value modular abstractions and reproducibility more than polishing one perfect demo.
  • You are motivated by agentic engineering and using models to compress search, test hypotheses, and extend your own bandwidth.
  • You treat intelligence and modeling as finite-precision dynamical systems and can reason about stability, safety, and capacity in that frame.

Not a fit if:

  • You cannot tolerate incomplete or slightly incorrect visualizations as scaffolds.
  • You prefer fixed, linear ticket execution instead of open-ended problem solving.
  • You are uninterested in physics, field theories, simulation fidelity, or control theoretic reasoning.
  • You avoid debugging across layers such as geometry, numerics, rendering, optimization, and ML agents.

We are hiring Agent Engineers.

Over the coming cycles, we expect to staff the equivalent of 3 to 12 full-time agent engineers with overlapping skill sets:

Plasma Physics Researcher
Research Engineer (AI/ML/Physics)
Institutional Research Sales Engineer
Cloud Infrastructure Engineer
Geospatial Engineer

Titles are flexible. The core requirement is that you and your agents can span at least three of these archetypes and operate as a unified human+agent engineering loop.

Location and work mode

  • Base: Hybrid in Reston, VA
  • Remote: Considered for exceptional candidates with strong prior signal and the ability to work with minimal supervision and high-bandwidth async communication

Short stints in person are encouraged even for remote hires. The work is faster when we can stand in front of the same screen and argue about field lines.

How to apply

What world model will you build to understand and extend this page? Show us how you think, how you reason about geometry, and how you work with automated agents inside a controlled development loop.

  1. Submit your resume and links.

    GitHub, personal site, or any prior visual, simulation, or physics work is useful.

  2. Automated screen.

    You will receive a short coding challenge centered on extending or improving the shader-based Earth–Moon–electron demo. The focus is clarity and iteration. You will define a small interface, make a focused change, and include a quick self-check or invariant that keeps the result stable.

  3. Agentic workflow check.

    The challenge includes one step where you specify how a Planner, Implementer, and Reviewer agent would operate on your change. We want to understand how you design boundaries, supervise automated contributors, and verify correctness.

  4. Deep technical conversation.

    If the challenge goes well, we will explore your physics intuitions, your engineering habits, your approach to world models, and how you coordinate with agents in a high-velocity research loop.

After clicking, send a LinkedIn message with a short intro and 2–3 links (GitHub, portfolio, or 1–2 builds).

Apply


This link routes to LinkedIn.