A miniature voice-interactive AI assistant inspired by TARS from Interstellar — physically fabricated, electronically designed, and running a full hybrid AI pipeline. It listens. It responds. It moves. And its humour setting is 75%.
This documentation was drafted with the help of Claude. Work on the project has not yet started, so the details below describe how the system is intended to work rather than how it was built. You can click on this link to view the conversation history.
TARS is a tabletop voice-interactive AI assistant housed inside a fabricated enclosure inspired by the TARS robot from the film Interstellar. The physical form is a tall rectangular monolith — approximately 25cm tall — 3D printed in black PETG, with two servo-actuated hinged panels on the front face that open and close in response to conversational state, and a WS2812B LED light bar running along the top that pulses and reacts to audio amplitude.
When idle, TARS sits closed with a slow breathing light animation. When you say "Hey TARS," it detects the wake word, opens its panels slightly, and starts listening. It transcribes your speech locally using Whisper, sends the text to a cloud LLM (Claude or GPT-4o) with a carefully engineered TARS persona system prompt, and speaks the response aloud using Piper neural TTS. Panel motion and LED animation are driven by a separate custom-designed PCB built around a Seeed XIAO ESP32-C3/S3 microcontroller, which receives state commands from the Raspberry Pi 3B+ over I2C.
The personality is specifically tuned to match the TARS character: dry, sardonic, precise, and deeply competent — with an adjustable humour setting accessible through a Flask web interface hosted on the Pi itself, reachable from any browser on the local network.
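One way the adjustable humour setting could reach the model is by templating it into the persona system prompt on every request. The sketch below is an assumption about how that injection might look — the prompt wording, function name, and 0–100 range are all illustrative, not the project's final prompt:

```python
def build_system_prompt(humour: int) -> str:
    """Compose a TARS persona system prompt with the current humour
    setting (0-100) injected. Prompt text is a placeholder sketch."""
    if not 0 <= humour <= 100:
        raise ValueError("humour setting must be 0-100")
    return (
        "You are TARS, a tactical robot assistant. "
        "You are dry, sardonic, precise, and deeply competent. "
        "Keep answers short enough to speak aloud. "
        f"Your humour setting is currently {humour}%. "
        "At low settings answer plainly; at high settings add "
        "one dry remark per response, never more."
    )

print(build_system_prompt(75))
```

Because the slider only changes a number in the prompt, the web interface never needs to restart the pipeline — the next LLM call simply picks up the new value.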
KEY POINT — Speech-to-text and TTS run entirely on-device. Only the LLM inference is sent to the cloud, keeping audio data private and latency manageable.
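The conversational states described above (idle, listening, thinking, speaking) form a small state machine that both the LED bar and the panels follow. A minimal sketch, assuming these four states and the transition rules implied by the description — the names and the timeout/error fallbacks are assumptions:

```python
from enum import Enum

class State(Enum):
    IDLE = 0
    LISTENING = 1
    THINKING = 2
    SPEAKING = 3

# Legal transitions in the loop: wake word moves IDLE -> LISTENING,
# end of user speech -> THINKING, first TTS audio -> SPEAKING,
# end of playback -> IDLE. Timeout/error paths fall back to IDLE.
TRANSITIONS = {
    State.IDLE: {State.LISTENING},
    State.LISTENING: {State.THINKING, State.IDLE},
    State.THINKING: {State.SPEAKING, State.IDLE},
    State.SPEAKING: {State.IDLE},
}

def advance(current: State, target: State) -> State:
    """Move to `target` if the transition is legal, else stay put."""
    return target if target in TRANSITIONS[current] else current
```

Keeping the state machine explicit means the Pi only ever has to send the MCU a single state id, and all animation detail lives in firmware.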
TARS-inspired builds exist in the maker community, but they tend to fall into two categories: static prop replicas with no electronics, or basic Arduino/servo builds that play audio clips. Neither category has produced a fully-functional AI assistant inside a fabricated enclosure that integrates a live LLM pipeline.
The novel contribution of this project is the integration of all layers: fabricated physical form, custom electronics, servo-driven reaction to conversational state, and a specifically engineered AI character — as a single cohesive system built from scratch in a Fab lab.
| Component | Spec / Part | Purpose |
|---|---|---|
| Raspberry Pi 3B+ | Quad-core 1.4GHz, 1GB RAM | Main compute — AI pipeline, web server |
| ReSpeaker 2-Mic HAT V2.0 | TLV320AIC3104 codec, 2× MEMS mics, APA102 LEDs, 3.5mm jack | Audio input + easy audio output via 3.5mm |
| Seeed XIAO ESP32-C3/S3 | C3: 160MHz RISC-V; S3: 240MHz dual-core Xtensa | Custom PCB MCU — servo + LED control |
| SG92R micro servo ×2 | 2.5kg/cm, 4.8–6V | Panel hinge actuation |
| WS2812B LED strip | 30 LEDs/m, ~15cm length | Reactive light bar on body front |
| AMS1117-3.3 LDO | 3.3V, 1A | MCU power regulation on custom PCB |
| Passive components | 100nF caps ×4, 10µF cap, 100µF cap, 1000µF cap, 470Ω, 10kΩ ×2, 330Ω | Decoupling, pull-ups, LED protection |
| 4Ω 3W speaker (~40mm) | — | Audio output via ReSpeaker 3.5mm jack + small amp |
| 32GB microSD (Class 10) | — | Pi OS + models |
| Material | Spec | Used for |
|---|---|---|
| PETG filament | Black, 1.75mm, ~400g | Main body shell, internal frame, panels, base core |
| Black acrylic sheet | 1.5mm, ~A4 size | Laser-cut speaker grille inlays for panels |
| Urethane resin | Shore 60A or similar | Cast final base for weight and finish |
| Silicone (mold rubber) | Platinum-cure, Shore 20A | Negative mold for base casting |
| M3 brass heat-set inserts | M3 × 4mm, ×10 | Body assembly joints |
| M3 × 8mm screws | — | Body assembly |
| M2 steel rod, ~50mm | — | Hinge pins for panels |
| FR1 copper-clad board | Single-sided, Fab lab stock | Milled PCB substrate |
| Item | Source | Est. Cost (€) |
|---|---|---|
| Raspberry Pi 3B+ | Verkkokauppa / Mouser / eBay | €35–45 |
| ReSpeaker 2-Mic HAT V2.0 | Seeed Studio (ships from Germany) | €12 |
| Seeed XIAO ESP32-C3/S3 | Seeed Studio or AliExpress | €5–7 |
| SG92R servos ×2 | AliExpress | €3–5 |
| WS2812B strip (~15cm) | AliExpress / leftover from Fab weeks | €2–3 |
| Speaker 4Ω 3W | AliExpress / salvaged | €2–3 |
| 32GB microSD | Local / Amazon | €6–8 |
| Passives (caps, resistors) | Fab lab stock / Mouser | €2–3 |
| PETG filament ~400g | Fab lab stock or Prusament | €8–10 |
| Black acrylic sheet | Fab lab stock | €2–4 |
| Urethane + silicone | Smooth-On / Fab lab stock | €5–8 |
| Hardware (screws, inserts, rod) | Local hardware store | €3–5 |
| FR1 PCB stock | Fab lab stock | €2–3 |
| **Total estimate** | — | €87–116 |
NOTE — PCB milling, 3D printing, and laser cutting are done in the Fab lab using available machines. The cost table covers materials only. If the Pi 3B+ is unavailable, a Pi Zero 2W (~€18) reduces cost significantly with some firmware and pipeline adjustments.
The principle is make rather than buy wherever possible. The following are designed and fabricated from scratch:
| System | Made or Bought | Notes |
|---|---|---|
| Body enclosure (shell, frame, panels) | ✅ Made — 3D printed PETG | Designed in Fusion 360, printed in Fab lab |
| Base | ✅ Made — cast urethane | Silicone mold from 3D printed positive |
| Panel inlays (grille pattern) | ✅ Made — laser-cut acrylic | 2D vector design, cut in Fab lab |
| Custom PCB (servo + LED controller) | ✅ Made — milled & soldered | KiCad schematic + layout, milled on Roland SRM-20 |
| MCU firmware | ✅ Made — written from scratch | Arduino (Espressif core) on the XIAO ESP32-C3/S3 |
| AI pipeline software | ✅ Made — Python scripts | Whisper → API → Piper → I2C animation |
| Web config interface | ✅ Made — Flask + HTML/CSS | Served locally on Pi WiFi |
| Raspberry Pi 3B+ | 🛒 Bought | Core compute platform |
| ReSpeaker HAT | 🛒 Bought | Mic array + audio codec |
| Servos, speaker, LEDs | 🛒 Bought | Off-the-shelf actuators |
RISK — The hinge mechanism is the highest mechanical risk area. Servo horn attachment point geometry and panel travel angle need iteration — expect at least 3 print cycles before the mechanism moves cleanly without binding.
The project will be evaluated against the Fab Academy final project criteria and the following self-defined success metrics:
| Criterion | Pass condition |
|---|---|
| Wake word detection | Reliably triggers within 1 second at 1–2 metre distance in normal room conditions |
| Transcription accuracy | Correctly transcribes clearly spoken sentences ≥90% of the time |
| Response latency | Audible response begins within 6 seconds of end of user speech |
| Persona consistency | Responses are dry, concise, and recognisably TARS-like across a 10-turn demo conversation |
| Panel animation | Panels open and close smoothly on state transitions without servo stall or binding |
| LED animation | LEDs change correctly between IDLE / LISTENING / THINKING / SPEAKING states |
| Web interface | Humour slider and conversation log accessible and functional from mobile browser on same network |
| Custom PCB | Board milled, soldered, and functioning — all I2C commands execute correctly |
| Fabrication quality | Enclosure looks finished, intentional, and clean — not a raw print |
| Fab Academy requirements | Incorporates 2D design, 3D design, additive fabrication, subtractive fabrication, electronics design, microcontroller programming, and system integration |
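The 6-second response-latency criterion can be sanity-checked against a rough per-stage budget. All of the stage timings below are placeholder assumptions for planning, not measurements from the real hardware:

```python
# Illustrative latency budget for the 6-second response criterion.
# Every figure here is an assumed planning number, not a measurement.
budget = {
    "end-of-speech detection": 0.5,
    "Whisper tiny.en transcription": 1.5,
    "LLM API round trip": 2.5,
    "Piper TTS first audio": 1.0,
}
total = sum(budget.values())
print(f"budgeted: {total:.1f}s of 6.0s target")  # budgeted: 5.5s of 6.0s target
```

If measured timings blow this budget, the LLM round trip and transcription are the two stages with the most room to trade off (shorter prompts, smaller model).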
This project demonstrates that a fully functional, character-driven AI assistant can be built entirely from fabricated components at Fab lab scale, for under €120 in materials. The hybrid pipeline architecture — local STT and TTS, cloud LLM — is a practical template for privacy-respecting voice AI that doesn't require a data centre. The custom PCB separating physical actuation from compute is a reusable pattern for any embedded AI project that needs reactive physical outputs.
Physical form matters for AI interaction. A voice assistant that moves — even minimally — creates a qualitatively different experience than a speaker sitting on a shelf. The enclosure is not decoration; it is part of the interface. This project explores whether the perceived intelligence and personality of an AI system can be amplified through physical design, not just prompt engineering.
Giving an AI assistant a strong, specific character raises genuine questions about anthropomorphisation and how people form relationships with machines. TARS is designed to be clearly a machine — the angular form, mechanical motion, and dry delivery are all deliberate signals that this is not trying to be human. That design choice is an ethical one: transparency about what it is matters. On the data side, keeping STT and TTS local ensures no audio ever leaves the device, addressing a real concern with commercial voice assistants.
As LLM inference becomes cheaper and more accessible, the barrier to building character-specific AI assistants drops to near zero on the software side. The remaining barrier is the physical form — and Fab labs are precisely positioned to address that. Projects like this are an early prototype of a world where personal AI assistants are not mass-produced products but individually designed and fabricated objects that reflect the identity and values of the person who built them.
WEEK 1 — Order all parts. Flash Raspberry Pi OS. Wire ReSpeaker HAT. Get Piper TTS producing audio through a speaker. Confirm the I2S audio pipeline works end-to-end.
WEEK 2 — Install and test Whisper.cpp with the tiny.en model. Write and test the LLM API call with the TARS system prompt. Chain all three stages: mic → transcribe → API → speak. Add openWakeWord.
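The "chain all three stages" milestone is just function composition once each stage has a callable wrapper. A hedged skeleton — the stage bodies below are stubs standing in for the real Whisper, LLM, and Piper calls, and all names are assumptions:

```python
def transcribe(audio: bytes) -> str:
    """Stub for local Whisper STT; replace with a whisper.cpp call."""
    return "what is your humour setting"

def query_llm(text: str) -> str:
    """Stub for the cloud LLM call; replace with the Claude/GPT-4o API."""
    return "Seventy-five percent. Don't push it."

def speak(text: str) -> str:
    """Stub for Piper TTS; replace with a call that plays audio."""
    return f"[speaking] {text}"

def handle_utterance(audio: bytes) -> str:
    # mic -> transcribe -> API -> speak: the chain Week 2 must prove.
    return speak(query_llm(transcribe(audio)))

print(handle_utterance(b"\x00" * 16))
```

Building the chain against stubs first means each real stage can be dropped in and tested independently, which keeps Week 2 debugging to one layer at a time.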
WEEK 3 — Wire the WS2812B strip and servos temporarily to Pi GPIO. Write Python animation trigger functions. Confirm all four states (IDLE, LISTENING, THINKING, SPEAKING) work correctly driven from the pipeline.
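The idle "breathing" animation mentioned earlier is easy to prototype as a pure brightness function before any LED wiring exists. A raised-cosine pulse gives the slow ease-in/ease-out look; the period and brightness limits here are guesses to be tuned on the real strip:

```python
import math

def breathing_brightness(t: float, period: float = 4.0,
                         floor: float = 0.05, ceil: float = 0.6) -> float:
    """Brightness in 0..1 for the idle breathing animation at time t.
    Raised cosine: sits at `floor` at t=0, peaks at `ceil` mid-cycle.
    Period and limits are placeholder assumptions."""
    phase = (1 - math.cos(2 * math.pi * t / period)) / 2  # 0..1
    return floor + (ceil - floor) * phase
```

Testing the curve in isolation means the only thing left to debug on hardware is timing and colour, not the maths.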
WEEK 4 — Design the custom PCB in KiCad — schematic, layout, DRC. Mill, solder, and flash the XIAO ESP32 firmware. Verify I2C communication between Pi and PCB. Replace the temporary GPIO servo/LED wiring with the PCB.
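Before the I2C link exists in hardware, the command framing can be pinned down and unit-tested on its own. The sketch below is a hypothetical register map and checksum scheme, not a finished protocol — command ids, payload meanings, and the XOR checksum are all assumptions:

```python
# Hypothetical Pi -> PCB I2C framing: [cmd, payload, checksum].
CMD_SET_STATE = 0x01   # payload: state id 0-3 (IDLE..SPEAKING)
CMD_SET_HUMOUR = 0x02  # payload: humour 0-100 (scales LED liveliness)

def frame(cmd: int, payload: int) -> bytes:
    """Build a 3-byte frame; checksum is XOR of cmd and payload."""
    if not (0 <= cmd <= 0xFF and 0 <= payload <= 0xFF):
        raise ValueError("cmd and payload must each fit in one byte")
    return bytes([cmd, payload, cmd ^ payload])

def parse(buf: bytes) -> tuple[int, int]:
    """MCU-side check: reject frames whose checksum does not match."""
    cmd, payload, chk = buf
    if chk != cmd ^ payload:
        raise ValueError("checksum mismatch")
    return cmd, payload
```

Agreeing on the frame format in software first means the Week 4 milestone reduces to "does the same parser pass on the MCU", rather than debugging protocol and wiring at once.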
WEEK 5 — Model body geometry in Fusion 360. Print the internal frame first and test-fit all components. Iterate until electronics, Pi, speaker, and HAT all sit cleanly. Print the outer shell halves.
WEEK 6 — Sand, prime, and finish the printed body. Laser-cut acrylic panel inlays and glue in place. Test panel hinge travel with servos. Produce the silicone mold and cast the urethane base.
WEEK 7 — Fit all electronics into the enclosure. Final cable management. Build the Flask web interface. Tune the TARS persona prompt with conversation memory. Document, photograph, and record the demo video.
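The Flask interface only needs one route to cover the humour slider's read and write paths. A minimal sketch under assumed names (`settings` shared with the pipeline, `/humour` endpoint — both are illustrative choices, not the project's final API):

```python
from flask import Flask, request

app = Flask(__name__)
settings = {"humour": 75}  # in the real build, shared with the pipeline

@app.route("/humour", methods=["GET", "POST"])
def humour():
    # POST from the web slider updates the setting; GET reads it back.
    if request.method == "POST":
        value = int(request.form["humour"])
        settings["humour"] = max(0, min(100, value))  # clamp to 0-100
    return {"humour": settings["humour"]}

if __name__ == "__main__":
    # Bind to 0.0.0.0 so any browser on the Pi's network can reach it.
    app.run(host="0.0.0.0", port=5000)
```

Returning a dict lets Flask serve JSON directly, so the same endpoint can back both the slider page and any future script that wants to poke the setting.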
APPROACH — Software first, body last. The enclosure is designed around a known-working electronics stack, not the other way around. If the pipeline isn't talking by end of Week 2, that is the blocker to resolve before touching Fusion 360.
← Back to Main Page