Five AI capstone project ideas with real infrastructure
Skip the toy demos. Here are five capstone project ideas using real AI workflow automation infrastructure that will impress professors and hiring managers.
Most AI capstone projects are forgettable. A chatbot that answers FAQ questions. A sentiment classifier trained on movie reviews. Stuff that made sense in 2020 but won’t turn a single head in a 2026 hiring committee. The ideas below take a different approach, built on the same workflow automation infrastructure Tallyfy runs in production: real infrastructure, not demos.
Summary
- Most capstone projects use toy infrastructure: building on free-tier APIs with no production patterns won’t differentiate you when nearly 90% of recruiters want evidence of real problem-solving
- AI agents need workflows to follow: everyone’s building agents, but without structured sequential, parallel, and evaluation-loop patterns, they’re just chatbots with extra steps
- These five projects use MCP and real workflow automation: each idea connects AI agents to production-grade infrastructure through Tallyfy’s 40+ MCP server tools, giving you something tangible to demo
- Students get Tallyfy free for 2 years: contact us at /contact/ with your .edu email and skip paying for the infrastructure that makes these projects possible
Problem with most capstone projects
I’ll be blunt. The vast majority of AI capstone projects I’ve seen are variations on the same theme: take a pre-trained model, wrap a Streamlit UI around it, and call it a day.
That approach had legs three years ago. Not anymore. Hiring managers have caught on. NACE’s research consistently shows that employers want to see candidates who solved real problems using real systems — not students who fine-tuned a model on Kaggle data and deployed it to a free Hugging Face space. The gap between “I built a cool demo” and “I built something that could run in production” is where most graduating students fall flat.

What makes a capstone project actually impressive in 2026 is connecting an AI agent to real infrastructure — not a mock API, not a local database pretending to be a company, but real tools, real protocols, real workflow patterns. The professors who’ve been reviewing capstone submissions for years can spot the difference between something built on toy scaffolding and something that could survive contact with actual users. And so can hiring managers, who’ve seen enough Streamlit wrappers to last a lifetime.
That’s where MCP — the Model Context Protocol — changes the game. Anthropic introduced it in late 2024, and by now IBM, Microsoft, OpenAI, and Google all support it. MCP gives AI agents a standardized way to connect to external systems. Think of it as USB-C for AI — one protocol, many tools. And Tallyfy has a production MCP server with 40+ tools that your agent can call.
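To make "one protocol, many tools" concrete: MCP messages are JSON-RPC 2.0, and an agent invokes a server tool with a `tools/call` request. A minimal sketch of what that request looks like on the wire; the tool name `launch_process` and its arguments are hypothetical stand-ins here, not confirmed Tallyfy tool names (check the server's advertised tool list):

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` request as a JSON-RPC 2.0 message."""
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

# Hypothetical tool name and arguments, for illustration only.
payload = build_tool_call(1, "launch_process", {"template_id": "onboarding-v1", "name": "Jane Doe, week 1"})
print(payload)
```

In practice an MCP client SDK builds and sends these messages for you; the point is that every tool on every compliant server is reached through this one request shape.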
That’s the foundation for every project idea below.
AI-powered employee onboarding system
The problem: New hires at mid-size companies sit through a chaotic first week. HR sends a welcome email. IT gets a ticket three days late. The manager forgets to schedule a one-on-one. The new employee fills out the same forms twice. In our experience with workflow automation, onboarding is the single most common process that breaks down when it’s handled manually.
What you’d build: An AI agent that orchestrates the entire onboarding workflow. When a new hire is added to the system, the agent uses MCP to launch a Tallyfy process, assign tasks to the right departments, monitor completion, and escalate anything that’s stuck. The agent can also answer the new hire’s questions by reading the workflow context — “When do I get my laptop?” becomes a lookup, not an email chain.
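The escalation piece can be prototyped before any API is wired in. A minimal sketch with a simplified task record standing in for what the MCP task tools would return (these field names are assumptions, not Tallyfy's schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    name: str
    assignee: str
    deadline: datetime
    completed: bool = False

def find_overdue(tasks: list[Task], now: datetime) -> list[Task]:
    """Return incomplete tasks past their deadline; these are the ones the agent escalates."""
    return [t for t in tasks if not t.completed and t.deadline < now]

now = datetime(2026, 1, 10)
tasks = [
    Task("Provision laptop", "IT", now - timedelta(days=3)),         # stuck for three days
    Task("Schedule 1:1", "Manager", now + timedelta(days=2)),        # still on time
    Task("Collect tax forms", "HR", now - timedelta(days=1), True),  # done, ignored
]
overdue = find_overdue(tasks, now)
print([t.name for t in overdue])  # only the laptop ticket needs escalation
```

The real agent would run this check on a schedule against live process data and post an escalation (a comment, a reassignment, a nudge) through the MCP tools instead of printing.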
Tech stack: Python or TypeScript, Claude or GPT via MCP, Tallyfy MCP server, a simple web dashboard (React or plain HTML).
Tallyfy components: Process templates for onboarding steps, form fields for new hire data, automated task assignment, deadline tracking via MCP tools, BYO AI integration for inline AI responses.
Expected deliverables: Working agent that launches and manages onboarding processes, a demo video showing the full flow, a write-up comparing manual vs. automated completion times, source code on GitHub.
Difficulty: Intermediate. You’ll need to understand workflow patterns for AI agents and MCP basics, but the Tallyfy API handles the heavy lifting.
Resume bullet: “Built an AI-powered employee onboarding system using MCP protocol and Tallyfy workflow infrastructure, reducing simulated onboarding task completion time by 60% through automated routing and escalation.”
Build an automated compliance checker
The problem: Regulated industries spend a staggering amount on compliance. AI21’s research found that organizations spend 15-20% of operational budgets on compliance activities. Most of that cost comes from humans manually reviewing whether processes follow the rules. It’s tedious, error-prone, and doesn’t scale.
What you’d build: An AI agent that evaluates each step in a running workflow against a set of compliance rules. The agent reads the process definition from Tallyfy via MCP, compares it against a rules engine you define (could be as simple as a JSON config or as complex as a small LLM prompt chain), and flags steps that are missing required approvals, lack documentation, or violate timing constraints.
This is the continuous agent evaluation loop pattern in action — the agent doesn’t just check once, it re-evaluates as the process progresses.
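The rule-check core of that loop can start as plain data. A sketch using a JSON-style ruleset and dicts in place of the process definition fetched over MCP; the field names (`tags`, `captures`) are illustrative assumptions, not Tallyfy's schema:

```python
# Each rule: steps tagged `applies_to` must include the `requires` capture.
RULES = [
    {"id": "approval-required", "applies_to": "payment", "requires": "approval"},
    {"id": "evidence-required", "applies_to": "audit", "requires": "attachment"},
]

def evaluate(step: dict, rules: list[dict]) -> list[str]:
    """Return the ids of every rule this step violates."""
    violations = []
    for rule in rules:
        if rule["applies_to"] in step.get("tags", []) and rule["requires"] not in step.get("captures", []):
            violations.append(rule["id"])
    return violations

# Re-run the whole check each time the process state changes: the evaluation loop.
process = [
    {"name": "Issue refund", "tags": ["payment"], "captures": []},
    {"name": "File audit note", "tags": ["audit"], "captures": ["attachment"]},
]
flagged = {}
for step in process:
    violations = evaluate(step, RULES)
    if violations:
        flagged[step["name"]] = violations
print(flagged)
```

An LLM layer on top of this would handle the rules that resist crisp encoding, such as "documentation must justify the exception", while the deterministic checks stay cheap and auditable.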
Tech stack: Python, Claude or GPT via MCP, Tallyfy MCP server, a rules definition format (JSON or YAML), a reporting dashboard.
Tallyfy components: Process templates with compliance-relevant steps, form captures for audit data, the MCP search and inspection tools, task completion tracking.
Expected deliverables: Working compliance agent, a sample ruleset for a specific regulation (HIPAA, SOX, or GDPR basics), a report showing flagged violations in test processes, source code and documentation.
Difficulty: Advanced. You’ll be combining LLM reasoning with rule-based logic, which means careful prompt engineering and edge case handling.
Resume bullet: “Designed an AI compliance agent that continuously evaluates workflow steps against regulatory rules via MCP integration, achieving 94% accuracy on a HIPAA-derived test suite of 50 process violations.”
Multi-department approval workflow with AI routing
The problem: Approval workflows are a mess in most organizations. A purchase request goes to the wrong manager. A contract sits in someone’s inbox for two weeks because nobody knew it needed legal review. The routing logic lives in someone’s head — and that someone is on vacation.
What you’d build: An AI agent that reads incoming requests, classifies them by type and urgency, and routes them through the correct approval chain. The magic is in the routing — the agent uses an LLM to interpret unstructured request descriptions and decide which departments need to sign off, in what order, and with what deadlines.
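The routing decision is a good place to start before wiring in the LLM. A keyword-based stand-in for the classifier, with approval chains as plain data; the department names and chains are illustrative, and the `classify` function is exactly what you would later replace with a prompt chain:

```python
APPROVAL_CHAINS = {
    "financial": ["manager", "finance"],
    "legal": ["manager", "legal", "executive"],
    "it": ["it-lead"],
}

def classify(request_text: str) -> str:
    """Placeholder classifier; swap this for an LLM call in the real agent."""
    text = request_text.lower()
    if any(word in text for word in ("contract", "nda", "liability")):
        return "legal"
    if any(word in text for word in ("purchase", "invoice", "budget")):
        return "financial"
    return "it"

def route(request_text: str) -> list[str]:
    """Map a free-text request to the ordered list of approvers."""
    return APPROVAL_CHAINS[classify(request_text)]

print(route("Need sign-off on a vendor NDA before Friday"))
```

Keeping the chains as data and the classifier behind one function means the LLM upgrade touches a single seam, and your classification accuracy report compares the two implementations directly.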
Tech stack: Python or TypeScript, Claude or GPT, Tallyfy MCP server, a classification model or prompt chain, a simple submission form.
Tallyfy components: Multiple process templates (one per approval type), conditional routing via if-this-then-that rules, task assignment to groups, deadline automation, the MCP tools for launching and monitoring processes.
Expected deliverables: Working routing agent, at least three distinct approval workflows (financial, legal, IT), a classification accuracy report, a demo showing an ambiguous request being correctly routed, source code.
Difficulty: Intermediate to advanced. The classification piece requires solid prompt engineering. The workflow orchestration is straightforward once you understand MCP project patterns.
Resume bullet: “Created an AI routing agent for multi-department approval workflows using LLM classification and MCP-based process orchestration, correctly routing 89% of ambiguous requests across three department types.”
Turning process observation into live templates
The problem: Nobody reads documentation. I’ve been saying this for years at Tallyfy, and the data backs it up. Teams spend weeks writing SOPs that go stale the moment someone changes a step. The documentation exists in a wiki that nobody visits. Meanwhile, the actual process lives in people’s habits and muscle memory. There’s a brutal gap between how work is documented and how work actually happens.
What you’d build: A system that records how a process is actually performed (via screen recordings, keystroke logs, or structured observation notes), then uses an AI agent to structure that raw data into a formal workflow template — and deploys it directly into Tallyfy via MCP. Record, structure, deploy. Three steps.
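The structuring step can be sketched with a plain event log standing in for the capture tool, and a dict standing in for the template the agent would then create over MCP. The output shape below is an assumption for illustration, not Tallyfy's template schema:

```python
def log_to_template(title: str, events: list[dict]) -> dict:
    """Collapse consecutive events in the same app into one ordered workflow step."""
    steps = []
    for event in events:
        if steps and steps[-1]["app"] == event["app"]:
            steps[-1]["actions"].append(event["action"])
        else:
            steps.append({"app": event["app"], "actions": [event["action"]]})
    return {"title": title, "steps": steps}

events = [
    {"app": "HRIS", "action": "open new-hire record"},
    {"app": "HRIS", "action": "export details"},
    {"app": "Email", "action": "send welcome message"},
]
template = log_to_template("Onboard new hire", events)
print(len(template["steps"]))  # three events collapse into two steps
```

In the full pipeline the LLM replaces this naive grouping heuristic, inferring step boundaries and names from messy activity data, and the final dict becomes the payload for the MCP template-creation tools.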
Tech stack: Python, a screen recording or activity capture tool (even a simple JSON logger works for a capstone), Claude or GPT for structuring, Tallyfy MCP server for deployment.
Tallyfy components: Template creation via MCP, step creation with form fields, automatic deadline setting, the full process documentation pipeline.
Expected deliverables: A capture tool (even minimal), an AI structuring pipeline that produces valid Tallyfy templates, side-by-side comparison of AI-generated vs. manually-created templates, accuracy metrics, source code.
Difficulty: Advanced. This is genuinely hard. The capture-to-structure pipeline requires creative thinking about what “process observation” even means in your context. But it’s also the kind of project that makes a hiring manager sit up and pay attention.
Resume bullet: “Built an AI process documentation pipeline that converts unstructured activity recordings into deployable workflow templates via MCP, achieving 85% structural accuracy against human-authored baselines across five test processes.”
Workflow analytics dashboard with AI insights
The problem: Most workflow tools show you data. Task completed, task overdue, process running. What they don’t tell you is why. Why does step 3 always bottleneck on Tuesdays? How come processes started by the sales team take 40% longer than ones started by operations? Why did completion rates drop last month? The data is there. The insight is missing.
What you’d build: An analytics dashboard that pulls workflow data from Tallyfy via MCP, runs it through an AI analysis pipeline, and surfaces actionable insights. Not just charts — explanations. “Step 4 has a median completion time of 3.2 days, which is 2.1x the deadline. The primary bottleneck correlates with assignee workload on Mondays and Tuesdays. Recommendation: split the step or add a parallel reviewer.”
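The detection underneath an insight like that is simple statistics; the LLM's job is the explanation and recommendation. A sketch on simulated completion data (step names and numbers here are made up, not the figures quoted above):

```python
from statistics import median

def find_bottlenecks(runs: dict[str, list[float]], deadlines: dict[str, float]) -> dict[str, float]:
    """Flag steps whose median completion time (days) exceeds the deadline; value is the overrun ratio."""
    flagged = {}
    for step, durations in runs.items():
        mid = median(durations)
        if mid > deadlines[step]:
            flagged[step] = round(mid / deadlines[step], 1)
    return flagged

# Simulated durations in days across several process runs.
runs = {"Collect docs": [0.5, 1.0, 0.8], "Legal review": [3.0, 4.0, 3.5]}
deadlines = {"Collect docs": 1.0, "Legal review": 2.0}
print(find_bottlenecks(runs, deadlines))
```

The dashboard would feed each flagged step, plus context like assignee workload, into a prompt asking the model to explain the pattern and suggest a fix, which is where the prompt engineering challenge lives.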
Tech stack: Python, Tallyfy MCP server, a visualization library (Plotly, Chart.js, or D3), Claude or GPT for insight generation, a web frontend.
Tallyfy components: The MCP analytics and search tools, process health inspection, task completion data, user workload data.
Expected deliverables: Working dashboard with at least five distinct insight types, a comparison between raw analytics and AI-augmented insights, user testing feedback (even from classmates playing the role of operations managers), source code.
Difficulty: Intermediate. The MCP data retrieval is straightforward. The interesting challenge is prompt engineering the AI to produce genuinely useful insights rather than stating the obvious.
Resume bullet: “Developed an AI-augmented workflow analytics dashboard using MCP integration, generating natural language insights that identified three previously undetected bottleneck patterns across 200+ simulated process runs.”
Why these projects work for your career
Here’s the mega trend you need to understand: AI is not the problem — the absence of structured processes is. Everyone is building agents, but almost nobody is building the workflows those agents need to follow. AI without process infrastructure is just a chatbot doing tricks. The students who get this — who build projects connecting AI to real operational systems — are the ones who’ll stand out.
I’m probably biased, having spent years building Tallyfy. But the pattern is real and bigger than any single product. Red Hat, Microsoft, and every major cloud provider are building MCP support into their platforms. Process-aware AI isn’t a niche — it’s becoming the default architecture.
In our conversations with operations teams, the question has shifted from “should we use AI?” to “how do we give AI the right processes to follow?” Your capstone project can answer that question with working code.
And here’s the practical bit: students get Tallyfy free for 2 years. Just reach out at /contact/ with your .edu email. No credit card. No time-limited trial. Two full years of production infrastructure to build on.
That’s enough time to finish your capstone, land a job, and still have the project running as a portfolio piece during your first year of employment.
About the Author
Amit is the CEO of Tallyfy. He is a workflow expert and specializes in process automation and the next generation of business process management in the post-flowchart age. He has decades of consulting experience in task and workflow automation, continuous improvement (all the flavors) and AI-driven workflows for small and large companies. Amit did a Computer Science degree at the University of Bath and moved from the UK to St. Louis, MO in 2014. He loves watching American robins and their nesting behaviors!
Follow Amit on his website, LinkedIn, Facebook, Reddit, X (Twitter) or YouTube.
Automate your workflows with Tallyfy
Stop chasing status updates. Track and automate your processes in one place.