About Me

Who Am I?

I'm Alonso Cano, a full-stack machine learning and robotics engineer with experience in large-scale data systems, robot autonomy, cloud infrastructure, and frontend development.

At Carnegie Mellon University, I've been leading the development of high-throughput pipelines for aviation trajectory prediction datasets—scaling to 20 TB—and building full-stack tools for robotic systems using Python, React, Docker, and cloud services. Before that, at PyLC, I deployed LLM-driven assistants and data extraction systems on AWS Bedrock, ECS, and ECR, cutting end-to-end processing times from hours to seconds.

Education

Université Sorbonne Sciences, Paris VI

2017 - 2019

My master's degree was in Intelligent Systems, Image and Sound Processing.

Université Pierre et Marie Curie (UPMC), Paris VI

2014 - 2017

Coming from Mexico, I took on the challenge of studying abroad at one of the most prestigious universities in France. I arrived knowing very little French, and I can now proudly say that I succeeded and speak French fluently.

  • Shanghai, China at Tongji University
    2013
  • Paris, France at Jacques Monod High School
    2010 - 2011
  • Victoria, Canada at Glenlyon-Norfolk School
    2007 - 2008

Experience

Work Experience

Carnegie Mellon University

Project: Large-scale Dataset and Foundation Models for Trajectory Prediction in Aviation

  • Contributed to developing and open-sourcing a large-scale trajectory prediction benchmark for aviation, released on GitHub and Hugging Face
  • Developed a parallelized Python processing pipeline for a large-scale (~20 TB) trajectory dataset, reducing processing time by over 60% (from 40 hours to 15) for scalable analysis
  • Assisted with training experiment sweeps of transformer models using PyTorch Lightning, achieving state-of-the-art performance on benchmark trajectory prediction datasets
  • Developed data-splitting algorithms to avoid data leakage and ensure deterministic splitting for fair reproducibility
  • Co-authored and submitted a research paper to NeurIPS 2025 Datasets and Benchmarks Track, detailing the dataset, experiments, and model design

Project: Generative Robot Manipulation

  • Engineered workflow automation tools for a robotic painting system that uses learned stroke-generation models to reproduce input images
  • Designed and 3D-printed a calibrated workspace platform in Fusion 360, automating calibration and reducing setup time by over 70% (2 hours to 30 minutes)
  • Built a full-stack GUI with Django, React, and Docker for seamless user-robot arm interaction
  • Trained parameter-to-visual-output mapping models to improve stroke accuracy, motion expressiveness, and closed-loop feedback control

Project: Contact-rich Human-Robot Interactions using Proprioception and Soft Actuation

  • Designed and built a social robot prototype with an inflatable, soft outer layer enabling safe, contact-rich navigation and interaction with humans
  • Integrated a ROS-based navigation stack using the CMU Autonomous Exploration tool, enabling autonomous navigation and mapping on a custom-built mobile robot
  • Engineered and deployed a proprioception pipeline based on 3D deformation classification using Open3D and PyTorch, allowing the robot to detect touch types and trigger context-specific responses
  • Presented a demo of the prototype at the IEEE International Conference on Robotics and Automation (ICRA)

Ixaya - PyLC

May 2020 - July 2024

  • Engineered and deployed a RAG-powered virtual assistant using AWS Bedrock, ECS, ECR, and SerpAPI, enabling instant access to contextual portfolio data
  • Designed and iteratively refined prompt-engineering workflows to optimize LLM outputs for risk scenario analysis, cutting turnaround by over 95% (from 5 hours to 10 minutes) while improving output accuracy and consistency to support decision making
  • Automated backend workflows and optimized data extraction from documents and web sources using AWS EC2 and Azure AI, cutting form-completion time by over 95% (from 15 minutes to 25 seconds)
  • Developed scalable APIs with Flask, AWS ECR, and ECS for dynamic production workload handling
  • Implemented deep learning models for document classification, image segmentation, and text extraction

Synfiny Advisors

Oct 2021 - Oct 2022

  • Created RPA bots with Automation Anywhere to automate Miniso's product placement
  • The bots' main requirements were to maintain financial reports, item prices, quantities, and internal data
  • The automation sped processes up from around 2 hours to 15 minutes and eliminated repetitive tasks for end users

AFD.TECHNOLOGIES

  • Detected alarm anomalies in telecommunication services provided by Orange France
  • Used unsupervised algorithms for anomaly detection in alarm systems
  • Evaluated the unsupervised algorithms against statistical criteria to select the best one for each dataset
  • Implemented the selected models for the testing phase
  • Reduced the time the supervision team needed to detect certain anomalies

My Projects

Recent Projects

Hand gesture classifier

Using deep learning and OpenCV, I created a classifier that lets me write text on screen with hand gestures.

License plate detection

License plate detection and automatic blurring using PyTorch and OpenCV

Servo eyes

Servomotor-controlled eyes that detect and follow faces in a video stream

A* Search

Using A* search, the algorithm finds a path from the green square to the red square.
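
A minimal sketch of the idea, on a hypothetical grid where 0 is a free cell and 1 is a wall, using the Manhattan distance as the heuristic (the actual project's grid representation and visualization are not shown here):

```python
import heapq

def astar(grid, start, goal):
    """Find a shortest path on a 4-connected grid using A* search."""
    def h(p):
        # Manhattan-distance heuristic: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            # Reconstruct the path by walking parent links back to the start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None  # no path exists
```

Because the Manhattan heuristic never overestimates the remaining distance, the first time the goal is popped from the heap the path is guaranteed to be a shortest one.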

HSV Color Tracking

Using the HSV colorspace with OpenCV and Python, I track a yellow ball in a video stream.
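
The core of the tracker is a threshold in HSV space, where a color like yellow occupies a narrow hue band largely independent of brightness. Here is a pure-Python sketch of that check using only the standard library (the real project uses OpenCV's `cv2.inRange` on whole frames; the hue band below is an assumed range you would tune to your ball and lighting):

```python
import colorsys

# Assumed hue band for "yellow", on OpenCV's 0-179 hue scale
YELLOW_HUE = (20, 35)

def is_yellow(r, g, b):
    """Return True if an RGB pixel (0-255 channels) falls in the yellow hue band."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue = h * 179  # colorsys reports hue in [0, 1); rescale to OpenCV's range
    # Require some saturation and brightness so gray/dark pixels don't match
    return YELLOW_HUE[0] <= hue <= YELLOW_HUE[1] and s > 0.4 and v > 0.4

def centroid(pixels):
    """Centroid of the matching pixels' (x, y) positions -- the ball's position.

    `pixels` is an iterable of (x, y, (r, g, b)) tuples.
    """
    hits = [(x, y) for x, y, rgb in pixels if is_yellow(*rgb)]
    if not hits:
        return None  # ball not visible in this frame
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))
```

In the OpenCV version the same logic is a single `cv2.inRange(hsv_frame, lower, upper)` call followed by a contour or moment computation, which is what makes the HSV approach fast enough to run per frame.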