Hands-on research experience in autonomy, AI, and national security innovation.
The Research Institute for Tactical Autonomy (RITA) Summer Research Internship is an eight-week paid research experience designed for undergraduate students interested in STEM, national security innovation, and hands-on research. Interns are paired with RITA scientists and engineers to work on real projects in tactical autonomy, robotics, AI/ML, and sensing. The internship includes student housing, paid travel, and a competitive stipend.
Work on mission-critical autonomy and AI research supporting national defense priorities.
Receive mentorship from leading scientists working on autonomy and AI/ML across domains ranging from submarine to satellite.
Join a team of world-class research scientists and engineers to collaborate on high-value tactical autonomy solutions.
February 2026: Applications open
March 15, 2026: Application deadline
March 2026: Application review and interviews
April 2026: Final selections and onboarding logistics
June 2026: Program start
Transcript (PDF)
Must include full name, institution, major, GPA, and course list.
Resume (PDF, max. 2 pages)
Must include educational background, research/work experience, technical skills, and accomplishments.
Cover Letter (PDF, max. 2 pages)
Should describe your academic interests, motivation for applying, and how RITA’s internship aligns with your goals.
Short Essay Questions (max. 1000 characters)
This project investigates whether drone swarms produce unique atmospheric signatures that can be used as detection features in counter-UAS systems. When multiple drones operate in close proximity, their rotor wakes interact to create measurable patterns of turbulence, pressure variation, and acoustic disturbances. Understanding these patterns is a crucial step toward developing detection technologies that do not rely solely on radar, RF emissions, or vision.
Interns will design a small-scale experimental test bench to study how different swarm configurations generate distinctive atmospheric disturbances. They will then select appropriate sensing modalities and define the environmental and operational conditions needed for meaningful data collection, culminating in a preliminary field experiment design.
By the end of the project, interns will produce a prototype test setup, a dataset of swarm-induced atmospheric measurements, and an assessment of which signatures show potential for use in drone-swarm detection algorithms.
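To give a flavor of the signature-extraction side of this project, the sketch below pulls the dominant frequency components out of a synthetic acoustic time series, a toy stand-in for identifying rotor tones in field recordings. The sample rate, tone frequencies, and noise level are all illustrative assumptions, not values from an actual RITA experiment:

```python
import numpy as np

def acoustic_signature(samples, sample_rate, n_peaks=3):
    """Return the n_peaks strongest frequency components (Hz) of a
    mono time series -- a toy proxy for a swarm acoustic signature."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    top = np.argsort(spectrum)[-n_peaks:][::-1]  # strongest bins first
    return freqs[top]

# Synthetic "recording": two rotor tones (180 Hz and 240 Hz) buried in
# broadband noise; a real dataset would come from field sensors.
rate = 8000
t = np.arange(0, 1.0, 1.0 / rate)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 180 * t)
          + 0.8 * np.sin(2 * np.pi * 240 * t)
          + 0.1 * rng.standard_normal(t.size))

peaks = acoustic_signature(signal, rate)
print(sorted(peaks))  # both rotor tones appear among the top peaks
```

Real swarm signatures would of course be far messier, with rotor-wake interactions smearing these tones, which is exactly what the test bench is meant to characterize.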
This research project explores the application of causal learning methodologies to enhance trust and explainability in tactical autonomous systems. Interns will work on developing and implementing causal discovery and causal inference techniques specifically designed for unmanned aerial systems (UAS) and other autonomous platforms operating in complex tactical environments. The project addresses a critical challenge in military AI systems: understanding not just correlations in sensor data and mission outcomes, but the underlying causal mechanisms that drive autonomous decision-making. By uncovering causal relationships through discovery algorithms and enabling counterfactual reasoning through causal inference, this work aims to make autonomous systems more interpretable to human operators and more robust when encountering scenarios that differ from their training distributions.
Interns will gain hands-on experience with cutting-edge causal AI techniques while working on problems with direct real-world impact for defense applications. The project involves developing frameworks that integrate multi-modal data sources (imagery, sensor readings, environmental context) with causal graphical models to support explainable AI (XAI) for tactical autonomy. Participants will learn to implement causal discovery algorithms to identify cause-effect relationships in operational data, apply causal inference methods to generate counterfactual explanations for autonomous decisions, and evaluate how these approaches improve human-machine teaming and situational awareness. This interdisciplinary work bridges machine learning, causal reasoning, and military operations research, providing interns with valuable experience in responsible AI development for high-stakes applications where trust and transparency are paramount.
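As a minimal illustration of the causal-inference ideas above, the sketch below builds a toy structural causal model of a confounded autonomy scenario and shows why regressing outcomes on decisions alone misleads, while backdoor adjustment for the confounder recovers the true effect. The variable names and coefficients are invented for illustration, not drawn from any RITA system:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Toy structural causal model (all names/coefficients illustrative):
#   visibility -> decision  (UAS flies lower in poor visibility)
#   visibility -> outcome   (poor visibility lowers mission score)
#   decision   -> outcome   (true causal effect = +2.0)
visibility = rng.normal(size=n)
decision = 0.8 * visibility + rng.normal(size=n)
outcome = 2.0 * decision - 1.5 * visibility + rng.normal(size=n)

# Naive estimate: regress outcome on decision alone (confounded).
naive = np.polyfit(decision, outcome, 1)[0]

# Adjusted estimate: control for the confounder (backdoor adjustment).
X = np.column_stack([decision, visibility, np.ones(n)])
adjusted = np.linalg.lstsq(X, outcome, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")     # biased away from 2.0
print(f"adjusted slope: {adjusted:.2f}")  # close to the true +2.0
```

This is the one-variable version of the question the project asks at scale: which of the many correlated signals around an autonomous decision actually caused the outcome, and what would have happened under a different decision.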
This project explores the use of agentic AI to model competitive drone-team behaviors in a tactical wargame scenario. It focuses on developing intelligent drone teams that independently learn offensive and defensive strategies, coordinate actions, and adapt to an adversary in real time. The goal is to create a realistic, simulation-based environment for studying autonomous tactics, team coordination, and decision-making under uncertainty.
Interns will gain hands-on experience at the intersection of autonomy, reinforcement learning, and wargaming simulation. Interns will work on ensuring that agents coordinate as a team to defend a region or target, devise cooperative attack plans to penetrate the enemy defense, and try to predict likely opponent actions. Interns will participate in environment design, agent architecture selection, and training pipeline development.
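A tiny example of the team-coordination problem at the heart of this project: the sketch below assigns defenders to incoming attackers by searching for the pairing that minimizes total intercept distance. The positions and the brute-force assignment rule are illustrative stand-ins for what learned agents would do in the actual simulation environment:

```python
import itertools
import math

# Toy defend-the-region setup (positions are illustrative only).
defenders = [(0.0, 0.0), (10.0, 0.0)]
attackers = [(9.0, 5.0), (1.0, 4.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_targets(defenders, attackers):
    """Choose the defender-to-attacker pairing with the smallest total
    intercept distance -- a hand-coded proxy for team coordination."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(attackers))):
        cost = sum(dist(defenders[i], attackers[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Defender 0 takes the nearby attacker 1; defender 1 takes attacker 0.
print(assign_targets(defenders, attackers))  # -> (1, 0)
```

In the project itself, this kind of coordination would not be hand-coded: reinforcement-learning agents would have to discover it, and counter it, under uncertainty about the opponent.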
This research project focuses on developing an AI/ML-driven agentic framework to automate and accelerate key research and development (R&D) processes used by scientists and engineers (S&Es) within RITA and defense communities. Modern R&D cycles, including literature reviews, prototype development, simulation, and scholarly reporting, are often time-consuming, manual, and slow to adapt to the rapid pace of innovation in artificial intelligence. As global adversaries leverage AI/ML to accelerate capability development, it is critical that research organizations modernize how technical work is performed. A recent MIT study on AI-augmented R&D found that AI-assisted researchers discovered 44% more materials, leading to a 39% increase in patent filings and a 17% rise in downstream product innovation. This project aims to build intelligent, distributed AI agents capable of offloading and streamlining scientific workflows.
Interns will experiment with open-source agent protocols, study emerging research on automated scientific discovery, and help build a functional reference implementation of an agentic R&D automation pipeline. Interns will leave the program with practical experience building AI agents capable of accelerating complex scientific tasks.
Join a community of students and researchers advancing the future of tactical autonomy.
© Research Institute For Tactical Autonomy 2026. All Rights Reserved.
1328 Florida Ave NW, Washington, DC 20009