TARS

TACTICAL AI RESPONSE SYSTEM

Final Project Proposal — Applications & Implications Week

A miniature voice-interactive AI assistant inspired by TARS from Interstellar — physically fabricated, electronically designed, and running a full hybrid AI pipeline. It listens. It responds. It moves. And its humour setting is 75%.

This documentation was drafted with the help of Claude. Work on the project has not yet started, so the details below describe how the system is expected to work rather than how it was built. You can click on this link to view the conversation history.

3D Printing · Laser Cutting · Custom PCB · Electronics Design · Embedded Firmware · Python Pipeline · AI / LLM · Servo Mechanism · Molding & Casting

Contents

  1. What will it do?
  2. Who has done what beforehand?
  3. What will I design?
  4. Materials & components
  5. Where will they come from & how much?
  6. What parts and systems will be made?
  7. What fabrication processes will be used?
  8. Questions that need to be answered
  9. How will it be evaluated?
  10. Implications
  11. When will things happen?

What will it do?

TARS is a tabletop voice-interactive AI assistant housed inside a fabricated enclosure inspired by the TARS robot from the film Interstellar. The physical form is a tall rectangular monolith — approximately 25cm tall — 3D printed in black PETG, with two servo-actuated hinged panels on the front face that open and close in response to conversational state, and a WS2812B LED light bar running along the top that pulses and reacts to audio amplitude.

When idle, TARS sits closed with a slow breathing light animation. When you say "Hey TARS," it detects the wake word, opens its panels slightly, and starts listening. It transcribes your speech locally using Whisper, sends the text to a cloud LLM (Claude or GPT-4o) with a carefully engineered TARS persona system prompt, and speaks the response aloud using Piper neural TTS. Panel motion and LED animation are driven by a separate custom-designed PCB containing a SAMD21 microcontroller, which receives state commands from the Raspberry Pi 3 over I2C.

The personality is specifically tuned to match the TARS character: dry, sardonic, precise, and deeply competent — with an adjustable humour setting accessible through a Flask web interface hosted on the Pi itself, reachable from any browser on the local network.

Wake word (openWakeWord) → Transcribe (Whisper tiny) → LLM response (Cloud API) → Speak (Piper TTS) → Animate (SAMD21 PCB)

KEY POINT — Speech-to-text and TTS run entirely on-device. Only the LLM inference is sent to the cloud, keeping audio data private and latency manageable.
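The turn loop that ties these stages together can be sketched as plain Python with the stage implementations injected as callables. This is a minimal, illustrative structure, not the final orchestrator; the real stages would wrap openWakeWord, whisper.cpp, the LLM API, and Piper.

```python
def run_turn(listen, transcribe, ask_llm, speak, set_state):
    """One conversational turn after the wake word fires.

    The four stage functions stand in for openWakeWord / Whisper /
    the LLM API / Piper; set_state drives panels and LEDs via the
    animation PCB. All names here are illustrative.
    """
    set_state("LISTENING")
    audio = listen()            # capture until end of speech
    set_state("THINKING")
    text = transcribe(audio)    # local STT
    reply = ask_llm(text)       # cloud LLM with TARS persona prompt
    set_state("SPEAKING")
    speak(reply)                # local TTS out through the speaker
    set_state("IDLE")
    return reply
```

Injecting the stages keeps the flow testable on a laptop with stubs before any model or hardware is attached.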


Who has done what beforehand?

TARS-inspired builds exist in the maker community, but they tend to fall into two categories: static prop replicas with no electronics, or basic Arduino/servo builds that play audio clips. Neither category has produced a fully-functional AI assistant inside a fabricated enclosure that integrates a live LLM pipeline.

The novel contribution of this project is the integration of all layers: fabricated physical form, custom electronics, servo-driven reaction to conversational state, and a specifically engineered AI character — as a single cohesive system built from scratch in a Fab lab.


What will I design?

🖨 Physical — 3D & 2D Design

  • Main body shell — front and back halves in Fusion 360, printed in PETG
  • Internal frame — servo mounts, PCB standoffs, cable routing channels
  • Hinged panel geometry — hinge pin holes, servo horn attachment, travel limits
  • Weighted base — 3D printed positive used to cast a urethane final base
  • Laser-cut acrylic panel inlays — speaker grille pattern in 1.5mm black acrylic

⚡ Electronics

  • Custom PCB in KiCad — XIAO SAMD21 module (SAMD21G18 MCU), servo headers ×2, WS2812B LED output, I2C interface to Pi, SWD programming port, 3.3V regulator, status LED
  • All schematic symbols and footprints verified against Fab lab component library
  • Board milled on Fab lab Roland SRM-20, soldered by hand

💻 Firmware

  • SAMD21 firmware in Arduino/C++ — I2C target mode, PWM servo control, WS2812B animation state machine (IDLE, LISTENING, THINKING, SPEAKING)
  • Servo smooth-move interpolation to avoid jerky panel motion
  • Heartbeat LED for debug visibility
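The smooth-move interpolation can be prototyped in Python before porting it to C++. Cosine easing gives zero angular velocity at both ends of the travel, which is what keeps the panels from snapping open or shut. Function and parameter names are illustrative.

```python
import math

def smooth_angles(start_deg, end_deg, steps=20):
    """Cosine-eased waypoints between two panel angles.

    A Python sketch of the easing the C++ firmware would run once per
    PWM update tick; names and step count are placeholders.
    """
    out = []
    for i in range(steps + 1):
        t = i / steps                          # normalised time, 0 to 1
        s = (1 - math.cos(math.pi * t)) / 2    # eased 0 to 1, zero slope at ends
        out.append(start_deg + (end_deg - start_deg) * s)
    return out
```

Linear interpolation would start and stop the servo at full speed; the cosine curve accelerates and decelerates, which is what makes the motion read as deliberate rather than twitchy.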

🤖 Software Pipeline

  • Python orchestration script on Pi 3 — ties all stages together
  • openWakeWord integration for always-on detection
  • Whisper.cpp integration for local transcription
  • LLM API call with TARS persona system prompt + conversation memory
  • Piper TTS for neural voice output
  • Flask web interface — humour/honesty sliders, conversation log, wake word toggle
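The persona-plus-memory prompt assembly might look like the following sketch. The prompt wording, window size, and message format are placeholders, not final values.

```python
def build_messages(history, user_text, humour=75, max_turns=6):
    """Assemble one LLM request: TARS persona system prompt plus a
    sliding window of recent turns. All wording is a placeholder for
    the engineered persona prompt.
    """
    system = (
        "You are TARS: dry, sardonic, precise, and deeply competent. "
        f"Humour setting: {humour} percent. Keep replies short."
    )
    recent = history[-2 * max_turns:]  # one user + one assistant message per turn
    return [{"role": "system", "content": system},
            *recent,
            {"role": "user", "content": user_text}]
```

Keeping the humour value a parameter is what lets the Flask slider change the personality without restarting the pipeline.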

Materials & Components

Electronics

| Component | Spec / Part | Purpose |
| --- | --- | --- |
| Raspberry Pi 3B+ | Quad-core 1.4GHz, 1GB RAM | Main compute — AI pipeline, web server |
| ReSpeaker 2-Mic HAT V2.0 | TLV320AIC3104 codec, 2× MEMS mics, APA102 LEDs, 3.5mm jack | Audio input + easy audio output via 3.5mm |
| Seeed XIAO SAMD21 | 48MHz ARM Cortex-M0+, 256KB flash | Custom PCB MCU — servo + LED control |
| SG92R micro servo ×2 | 2.5kg/cm, 4.8–6V | Panel hinge actuation |
| WS2812B LED strip | 30 LEDs/m, ~15cm length | Reactive light bar on body front |
| AMS1117-3.3 LDO | 3.3V, 1A | MCU power regulation on custom PCB |
| Passive components | 100nF caps ×4, 10µF cap, 100µF cap, 1000µF cap, 470Ω, 10kΩ ×2, 330Ω | Decoupling, pull-ups, LED protection |
| 4Ω 3W speaker (~40mm) | — | Audio output via ReSpeaker 3.5mm jack + small amp |
| 32GB microSD (Class 10) | — | Pi OS + models |

Fabrication Materials

| Material | Spec | Used for |
| --- | --- | --- |
| PETG filament | Black, 1.75mm, ~400g | Main body shell, internal frame, panels, base core |
| Black acrylic sheet | 1.5mm, ~A4 size | Laser-cut speaker grille inlays for panels |
| Urethane resin | Shore 60A or similar | Cast final base for weight and finish |
| Silicone (mold rubber) | Platinum-cure, Shore 20A | Negative mold for base casting |
| M3 brass heat-set inserts | M3 × 4mm, ×10 | Body assembly joints |
| M3 × 8mm screws | — | Body assembly |
| M2 steel rod, ~50mm | — | Hinge pins for panels |
| FR1 copper-clad board | Single-sided, Fab lab stock | Milled PCB substrate |

Where will they come from & how much will they cost?

Estimated total electronics cost: ~€75 · Estimated fabrication materials cost: ~€15
| Item | Source | Est. Cost (€) |
| --- | --- | --- |
| Raspberry Pi 3B+ | Verkkokauppa / Mouser / eBay | €35–45 |
| ReSpeaker 2-Mic HAT V2.0 | Seeed Studio (ships from Germany) | €12 |
| Seeed XIAO SAMD21 | Seeed Studio or AliExpress | €5–7 |
| SG92R servos ×2 | AliExpress | €3–5 |
| WS2812B strip (~15cm) | AliExpress / leftover from Fab weeks | €2–3 |
| Speaker 4Ω 3W | AliExpress / salvaged | €2–3 |
| 32GB microSD | Local / Amazon | €6–8 |
| Passives (caps, resistors) | Fab lab stock / Mouser | €2–3 |
| PETG filament ~400g | Fab lab stock or Prusament | €8–10 |
| Black acrylic sheet | Fab lab stock | €2–4 |
| Urethane + silicone | Smooth-On / Fab lab stock | €5–8 |
| Hardware (screws, inserts, rod) | Local hardware store | €3–5 |
| FR1 PCB stock | Fab lab stock | €2–3 |
| **Total estimate** | | **€87–116** |

NOTE — PCB milling, 3D printing, and laser cutting are done in the Fab lab using available machines. The cost table covers materials only. If the Pi 3B+ is unavailable, a Pi Zero 2W (~€18) reduces cost significantly with some firmware and pipeline adjustments.


What parts and systems will be made?

The principle is make rather than buy wherever possible. The following are designed and fabricated from scratch:

| System | Made or bought | Notes |
| --- | --- | --- |
| Body enclosure (shell, frame, panels) | ✅ Made — 3D printed PETG | Designed in Fusion 360, printed in Fab lab |
| Base | ✅ Made — cast urethane | Silicone mold from 3D printed positive |
| Panel inlays (grille pattern) | ✅ Made — laser-cut acrylic | 2D vector design, cut in Fab lab |
| Custom PCB (servo + LED controller) | ✅ Made — milled & soldered | KiCad schematic + layout, milled on Roland SRM-20 |
| MCU firmware | ✅ Made — written from scratch | Arduino/C++ on the XIAO SAMD21 |
| AI pipeline software | ✅ Made — Python scripts | Whisper → API → Piper → I2C animation |
| Web config interface | ✅ Made — Flask + HTML/CSS | Served locally on Pi WiFi |
| Raspberry Pi 3B+ | 🛒 Bought | Core compute platform |
| ReSpeaker HAT | 🛒 Bought | Mic array + audio codec |
| Servos, speaker, LEDs | 🛒 Bought | Off-the-shelf actuators |

What fabrication processes will be used?

Additive

  • FDM 3D printing (PETG) — body shell halves, internal frame, hinged panels, base core. Key settings: 4 perimeters, 25% gyroid infill, 0.15mm layer height for panel faces.

Subtractive

  • PCB milling — Roland SRM-20 milling FR1 copper-clad. 0.4mm V-bit for traces, 0.8mm end mill for holes and board outline.
  • Laser cutting — panel grille inlays from 1.5mm black acrylic. Vector paths designed in Inkscape or Fusion 360.

Molding & Casting

  • Silicone mold making — platinum-cure silicone poured over 3D printed base positive.
  • Urethane casting — final base poured in two-part urethane with steel nuts embedded for weight.

Electronics

  • PCB design — KiCad schematic + layout, DRC-verified, exported as Gerber/SVG for milling.
  • Hand soldering — SMD passives (0402/0805), SOT-223 regulator, XIAO module, THT connectors.
  • Firmware flashing — via SWD header using J-Link or CMSIS-DAP programmer.

Questions that need to be answered

Technical open questions

  • Will Whisper tiny on Pi 3 be fast enough for a natural conversation cadence? Target: under 3 seconds transcription. May need to test whisper.cpp vs the Python port.
  • Can the ReSpeaker HAT and the I2S speaker amp coexist on the same Pi I2S bus, or does one need to be routed through the 3.5mm jack? Need to confirm ALSA configuration.
  • What is the maximum servo current draw when both SG92R units stall simultaneously, and does this cause the Pi's 5V rail to drop? May need a dedicated servo power line.
  • Will PETG sand and prime cleanly enough for a finish that looks intentional rather than printed? Need to test with filler primer and wet sanding.
  • What is the practical PCB minimum trace width achievable on the Fab lab's SRM-20 for this design? Target: 0.4mm traces, 0.4mm clearance.
  • How much context history can be included in the LLM prompt before API cost and latency become impractical? Need to test with 4, 6, and 10-turn memory windows.
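The last question above reduces to simple prompt-size arithmetic. With assumed averages (roughly 60 tokens per message and a 400-token persona prompt, both guesses to be replaced with measurements), the three candidate windows cost:

```python
def prompt_tokens(turns, tokens_per_turn=60, system_tokens=400):
    # One user + one assistant message per remembered turn, plus the
    # persona system prompt. All token figures are assumed averages,
    # not measurements.
    return system_tokens + 2 * turns * tokens_per_turn

for n in (4, 6, 10):
    print(n, prompt_tokens(n))   # 4 → 880, 6 → 1120, 10 → 1600
```

Even the 10-turn window stays far below typical context limits, so the practical constraint is per-request cost and latency rather than context size.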

RISK — The hinge mechanism is the highest mechanical risk area. Servo horn attachment point geometry and panel travel angle need iteration — expect at least 3 print cycles before the mechanism moves cleanly without binding.


How will it be evaluated?

The project will be evaluated against the Fab Academy final project criteria and the following self-defined success metrics:

| Criterion | Pass condition |
| --- | --- |
| Wake word detection | Reliably triggers within 1 second at 1–2 metre distance in normal room conditions |
| Transcription accuracy | Correctly transcribes clearly spoken sentences ≥90% of the time |
| Response latency | Audible response begins within 6 seconds of end of user speech |
| Persona consistency | Responses are dry, concise, and recognisably TARS-like across a 10-turn demo conversation |
| Panel animation | Panels open and close smoothly on state transitions without servo stall or binding |
| LED animation | LEDs change correctly between IDLE / LISTENING / THINKING / SPEAKING states |
| Web interface | Humour slider and conversation log accessible and functional from mobile browser on same network |
| Custom PCB | Board milled, soldered, and functioning — all I2C commands execute correctly |
| Fabrication quality | Enclosure looks finished, intentional, and clean — not a raw print |
| Fab Academy requirements | Incorporates 2D design, 3D design, additive fabrication, subtractive fabrication, electronics design, microcontroller programming, and system integration |
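The 6-second latency criterion can be sanity-checked against a per-stage budget. Every figure below is an assumed target consistent with the open questions above, not a measurement:

```python
# Illustrative per-stage latency budget for one turn (seconds).
# All figures are assumed targets to be verified on the real Pi 3.
budget_s = {
    "endpoint detection after speech": 0.5,
    "Whisper tiny transcription": 3.0,   # the under-3-second target from the open questions
    "LLM API round trip": 1.5,
    "Piper TTS to first audio": 1.0,
}
total = sum(budget_s.values())
print(f"{total:.1f} s")   # 6.0 s, right at the limit, so each stage needs headroom
```

The budget sums to exactly the pass threshold, which suggests the transcription stage is the one to optimise first: every second saved there is direct margin.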

Implications

Technical implications

This project demonstrates that a fully functional, character-driven AI assistant can be built entirely from fabricated components at Fab lab scale, for under €120 in materials. The hybrid pipeline architecture — local STT and TTS, cloud LLM — is a practical template for privacy-respecting voice AI that doesn't require a data centre. The custom PCB separating physical actuation from compute is a reusable pattern for any embedded AI project that needs reactive physical outputs.

Design implications

Physical form matters for AI interaction. A voice assistant that moves — even minimally — creates a qualitatively different experience than a speaker sitting on a shelf. The enclosure is not decoration; it is part of the interface. This project explores whether the perceived intelligence and personality of an AI system can be amplified through physical design, not just prompt engineering.

Social and ethical implications

Giving an AI assistant a strong, specific character raises genuine questions about anthropomorphisation and how people form relationships with machines. TARS is designed to be clearly a machine — the angular form, mechanical motion, and dry delivery are all deliberate signals that this is not trying to be human. That design choice is an ethical one: transparency about what it is matters. On the data side, keeping STT and TTS local ensures no audio ever leaves the device, addressing a real concern with commercial voice assistants.

Broader implications

As LLM inference becomes cheaper and more accessible, the barrier to building character-specific AI assistants drops to near zero on the software side. The remaining barrier is the physical form — and Fab labs are precisely positioned to address that. Projects like this are an early prototype of a world where personal AI assistants are not mass-produced products but individually designed and fabricated objects that reflect the identity and values of the person who built them.


When will things happen?

Now → Week 1 of build

Order all parts. Flash Pi 3 OS. Wire ReSpeaker HAT. Get Piper TTS producing audio through a speaker. Confirm I2S audio pipeline works end-to-end.

Week 2

Install and test Whisper.cpp with tiny.en model. Write and test the LLM API call with TARS system prompt. Chain all three stages: mic → transcribe → API → speak. Add openWakeWord.

Week 3

Wire WS2812B strip and servos temporarily to Pi GPIO. Write Python animation trigger functions. Confirm all four states (IDLE, LISTENING, THINKING, SPEAKING) work correctly driven from the pipeline.
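The Pi-side trigger functions can stay trivial if each conversational state maps to a one-byte command for the animation controller. The command values, I2C address, and function names below are assumptions; the real protocol gets defined alongside the PCB in Week 4.

```python
# Hypothetical state-to-command mapping for the animation PCB.
STATE_COMMANDS = {"IDLE": 0x00, "LISTENING": 0x01, "THINKING": 0x02, "SPEAKING": 0x03}
PCB_ADDR = 0x42   # assumed I2C address, to be fixed in the firmware

def send_state(state, bus=None):
    """Encode a conversational state and, if a bus is given, write it
    to the PCB (e.g. bus = smbus2.SMBus(1) on the Pi).
    """
    cmd = STATE_COMMANDS[state]   # raises KeyError on unknown states
    if bus is not None:
        bus.write_byte(PCB_ADDR, cmd)
    return cmd
```

During Week 3 the same functions can drive the temporary GPIO wiring; when the PCB arrives in Week 4, only the transport underneath them changes.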

Week 4

Design custom PCB in KiCad — schematic, layout, DRC. Mill, solder, and flash SAMD21 firmware. Verify I2C communication between Pi and PCB. Replace temporary GPIO servo/LED wiring with PCB.

Week 5

Model body geometry in Fusion 360. Print internal frame first, test-fit all components. Iterate until electronics, Pi, speaker, and HAT all sit cleanly. Print outer shell halves.

Week 6

Sand, prime, and finish the printed body. Laser-cut acrylic panel inlays and glue in place. Test panel hinge travel with servos. Produce silicone mold and cast urethane base.

Week 7 — Final integration

Fit all electronics into enclosure. Final cable management. Build Flask web interface. Tune TARS persona prompt with conversation memory. Document, photograph, and record demo video.

APPROACH — Software first, body last. The enclosure is designed around a known-working electronics stack, not the other way around. If the pipeline isn't talking by end of Week 2, that is the blocker to resolve before touching Fusion 360.

