Pablo Tirado Hidalgo

Hello 👋

I'm Pablo Tirado Hidalgo

Roboticist | Computer Vision Engineer | Maker

Building intelligent systems that simplify everyday life through robotics, computer vision, and edge AI

About Me

I'm a robotics engineer and computer scientist driven by a simple yet powerful question: How can I make one aspect of my life easier every day? This philosophy guides my work in robotics, computer vision, and edge AI, where I design and build intelligent systems that solve real-world problems.

My passion for making things extends from building autonomous robots with SLAM navigation and visual language models to creating practical tools like my Pokemon card arbitrage assistants that help me buy at the right price. I firmly believe in learning by doing. I've built a 3D printer from scratch using CD players, designed custom parts in Fusion360 for various projects (especially my robot), and I'm constantly iterating on new ideas.

My academic background combines a B.S. in Computer Science from Rutgers University with minors in Earth & Planetary Science and Physics - a unique blend that reflects my diverse curiosity. Since childhood, I've been fascinated by astronomy and mineralogy, interests I maintain to this day. When I'm not working on robots or writing code, you'll find me exploring DIY projects, gaming, or diving deep into my card collecting hobby by making new resources I can use and share with friends.

I thrive especially in research environments where I can explore multiple approaches to solving complex problems. My professional experience spans robotics, computer vision (specializing in object detection), and edge computing, all complemented by a computer science foundation that lets me tackle projects I wouldn't otherwise have been exposed to.

Featured Projects

Building solutions that matter

VILA-Powered Semantic Mapping & Navigation

Autonomous Exploration meets Modern Vision-Language Models (VLMs)

The Goal: To build a robot that doesn't just see a "map", but understands its environment. By combining ROS2 Humble, 2D LiDAR SLAM, and the VILA 2.7B Vision-Language Model, this robot autonomously maps indoor spaces and labels them for high-level semantic navigation.

Built with a Nav2 frontier-based explorer for zero-intervention mapping, a custom mecanum_odometry_node, and a multi-stage Docker deployment for ARM64/Jetson. It achieves ~700ms inference latency and aggregates multiple AI inferences at specific coordinates for spatial consensus mapping.
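The custom mecanum_odometry_node boils down to standard mecanum forward kinematics; a minimal sketch in Python (the node itself runs in ROS2), assuming an X-configured base with wheel radius r and half-wheelbase/half-track lx, ly:

```python
import math

def mecanum_body_velocity(w_fl, w_fr, w_rl, w_rr, r, lx, ly):
    """Body-frame velocities (vx, vy, wz) from the four wheel angular
    velocities in rad/s, for an X-configured mecanum base."""
    vx = r / 4.0 * (w_fl + w_fr + w_rl + w_rr)
    vy = r / 4.0 * (-w_fl + w_fr + w_rl - w_rr)
    wz = r / (4.0 * (lx + ly)) * (-w_fl + w_fr - w_rl + w_rr)
    return vx, vy, wz

def integrate_pose(x, y, theta, vx, vy, wz, dt):
    # First-order integration of body-frame velocities into the odom frame
    x += (vx * math.cos(theta) - vy * math.sin(theta)) * dt
    y += (vx * math.sin(theta) + vy * math.cos(theta)) * dt
    theta += wz * dt
    return x, y, theta
```

With equal wheel speeds the base drives straight forward; mismatched diagonals produce strafing or rotation.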

ROS2 · Nav2 · SLAM Toolbox · VILA 2.7B · NVIDIA Jetson
[Visual: Time-lapse building map in RViZ]
[Visual: Camera vs Map View]
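The spatial-consensus step is, at its core, a majority vote over repeated VLM inferences snapped to map cells. A minimal sketch of that idea, assuming an illustrative 0.5 m grid and hypothetical helper names:

```python
from collections import Counter, defaultdict

def grid_key(x, y, cell=0.5):
    # Snap a map coordinate to a grid cell so nearby inferences aggregate
    return (round(x / cell), round(y / cell))

class SemanticConsensus:
    """Majority-vote room labels from repeated VLM inferences at
    roughly the same coordinates."""
    def __init__(self, cell=0.5):
        self.cell = cell
        self.votes = defaultdict(Counter)

    def add(self, x, y, label):
        # Record one inference result at a map coordinate
        self.votes[grid_key(x, y, self.cell)][label] += 1

    def label_at(self, x, y):
        # Return the consensus label for this cell, or None if unseen
        counts = self.votes.get(grid_key(x, y, self.cell))
        return counts.most_common(1)[0][0] if counts else None
```

Aggregating several ~700 ms inferences per cell this way filters out one-off mislabels before a label is committed to the semantic map.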

CFinder - Pokemon Card WebApp

A web app for tracking your wishlist cards and finding where to buy them at the lowest price, covering both raw and graded cards.

React · Supabase · REST API · OAuth

eBay Price Checker Extension

Chrome extension that surfaces the lowest Buy It Now price and market prices from PriceCharting.com directly on eBay listing pages, helping me determine whether graded or raw cards are worth buying.

JavaScript · Chrome Extension · API Integration
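The "worth buying" check behind the extension is essentially a margin calculation against the PriceCharting market price. A minimal sketch in Python (the extension itself is JavaScript), with fee rate, shipping, and margin threshold as illustrative placeholders:

```python
def is_worth_buying(bin_price, market_price,
                    fee_rate=0.13, shipping=1.0, min_margin=0.15):
    """Rough arbitrage check: would reselling at market price clear a
    minimum margin after marketplace fees and shipping?
    fee_rate, shipping, and min_margin are illustrative, not eBay's
    actual fee schedule."""
    if bin_price <= 0:
        return False
    net_resale = market_price * (1 - fee_rate) - shipping
    margin = (net_resale - bin_price) / bin_price
    return margin >= min_margin
```

The same comparison works for raw and graded cards; only the market price looked up differs.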

Arduino VR Tracker

Inertial tracking system built with an Arduino, an MPU-6050 IMU (gyroscope + accelerometer), and custom filtering for low-cost VR body tracking in game engines.

Arduino · C++ · Sensor Fusion · MPU-6050
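The custom filtering on an MPU-6050 typically amounts to a complementary filter: integrate the gyro for responsiveness and lean on the accelerometer's gravity vector to cancel drift. A minimal one-axis sketch in Python (the tracker itself runs C++ on the Arduino), with alpha as an illustrative tuning value:

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: trust the integrated gyro short-term and the
    accelerometer long-term. alpha near 1 favors the gyro."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_pitch(ax, ay, az):
    # Pitch estimate from the gravity direction (valid when the sensor
    # is not accelerating linearly)
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))
```

Run per sample at the IMU rate (e.g. 100 Hz), feeding the previous output back in as `angle`.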

Technical Expertise

Programming Languages

Python
C++
Java
JavaScript
TypeScript
C
Assembly

Frameworks & Libraries

React
Next.js
Tailwind
ROS2
OpenCV
TensorFlow
PyTorch

Tools & Platforms

Git
GitHub
NVIDIA Jetson
Fusion360
3D Printing
3D Scanning
Arduino

Let's Build Something Together

Interested in robotics, computer vision, or collaborating on a project? I'd love to hear from you.