Ryan Friberg
Machine Learning Researcher | Software Engineer

I'm currently a computer science MS student at Columbia University (graduating December 2023), and I also work as a part-time research intern at Odyssey Therapeutics, where I focus on building a generative, diffusion-based deep learning pipeline for drug discovery. In March 2022, I graduated summa cum laude from the University of Chicago with a BS in computer science with a specialization in machine learning and a BA in astrophysics.

My research interests lie in computer vision, natural language processing, and robotics, with outside interests in topics such as computer graphics and security.

Resume | LinkedIn

profile photo

Please note: This website is still under construction and is actively being updated.

Project Experience

Ongoing Projects:

Context-based Image Retrieval
I am currently implementing a context-based image search and retrieval system for the Sloan Digital Sky Survey, comparing a custom transformer architecture against a CNN-based model for visual feature extraction.
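
As a rough illustration of the retrieval step, here is a minimal sketch that embeds images with a pretrained CNN backbone and ranks a gallery by cosine similarity. The backbone, preprocessing, and names are placeholders rather than the project's actual transformer/CNN comparison or SDSS preprocessing.

# Minimal retrieval sketch: embed images with a CNN backbone and rank a
# gallery by cosine similarity to a query image. Illustrative only.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Pretrained ResNet with its classification head removed -> 512-d features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return F.normalize(backbone(batch), dim=1)       # unit-norm feature vectors

def retrieve(query_path, gallery_paths, k=5):
    q = embed([query_path])                          # (1, 512)
    g = embed(gallery_paths)                         # (N, 512)
    scores = (q @ g.T).squeeze(0)                    # cosine similarity
    top = scores.topk(min(k, len(gallery_paths)))
    return [(gallery_paths[i], scores[i].item()) for i in top.indices]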

Pediatric Surgery AR System
In partnership with doctors from Columbia University's School of Medicine, I am also building an AR application aimed at assisting physicians with pediatric lumbar punctures.

Past Projects:

Image-to-Music: The "Hans Zimmer" Model
I built a deep learning pipeline that fine-tuned a ViT on scraped images of diverse scenes depicting certain emotions (calm, gloomy, happy, etc.), using the scraping search queries to assign the metadata/labels for each image. The pipeline also scraped YouTube for audio conveying the same set of emotions, with the search queries serving as the mapping from one modality to the other. The audio was clipped into small chunks and converted to spectrogram images (a sketch of this step follows below). The final stage of the pipeline fine-tuned Stable Diffusion to generate spectrogram images directly from a sequence of unique keywords/emotions.

code (numpy, torch, huggingface), paper (unsubmitted/unpublished)
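
The audio-to-spectrogram step referenced above can be sketched as follows, assuming clips have already been downloaded as .wav files; librosa is used here for illustration and the parameters are placeholders, not the project's actual settings.

# Convert short audio clips into mel-spectrogram images so that an image
# model can later be trained on them. Illustrative parameters only.
import numpy as np
import librosa
from PIL import Image

def clip_to_spectrogram_image(wav_path, clip_seconds=5, sr=22050, n_mels=128):
    y, sr = librosa.load(wav_path, sr=sr, duration=clip_seconds)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)            # log scale
    # Normalize to 0-255 and save as a grayscale image.
    img = 255 * (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
    return Image.fromarray(img.astype(np.uint8))

# Example: clip_to_spectrogram_image("calm_piano_clip.wav").save("calm_piano.png")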

Recreation of End-to-End Robotics Perception and Motion Planning Systems
I implemented a U-Net semantic segmentation network to detect objects in the robot's pick-and-place task environment, used algorithms such as ICP to perform pose estimation (sketched below), and built both visual affordance and action regression grasp prediction systems. I also added path-planning and obstacle-avoidance functionality built on the RRT algorithm.

code (pybullet, numpy, torch), no associated paper
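
For reference, here is a bare-bones point-to-point ICP iteration of the kind used for pose estimation, written with numpy/scipy purely as an illustration; the project's actual implementation and data structures differ.

# Minimal point-to-point ICP: match nearest neighbors, then solve for the
# best rigid transform via SVD (Kabsch), and repeat. Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Align source (N,3) to target (M,3); returns a 4x4 transform."""
    src = source.copy()
    T_total = np.eye(4)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # nearest target point per source point
        matched = target[idx]
        # Best-fit rotation/translation via SVD (Kabsch algorithm).
        src_c, tgt_c = src.mean(0), matched.mean(0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        T_total = T @ T_total
    return T_total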

Context-Limited Optical Character Recognition
I implemented a custom ResNet architecture and trained it on image data containing individual letters (both upper and lowercase) and all numerical digits. The main research question was whether a network trained on a large, custom, conglomerate dataset of diverse but exclusively black-and-white character images could still extract text from raw images containing characters in arbitrary visual contexts. The pipeline used Pytesseract (a wrapper around Google's Tesseract engine) for character detection and the custom network for recognition; a simplified version of this flow is sketched below.

code (numpy, torch), paper (unsubmitted/unpublished)
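
The detect-then-recognize flow can be sketched roughly as follows. Here char_net and classes are hypothetical stand-ins for the trained ResNet and its label set, and pytesseract is used only to propose character bounding boxes.

# Detect character boxes with pytesseract, then classify each crop with a
# custom network ("char_net" is a placeholder for the trained ResNet).
import pytesseract
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])

@torch.no_grad()
def read_characters(image_path, char_net, classes):
    img = Image.open(image_path)
    w, h = img.size
    out = []
    # image_to_boxes yields one "char left bottom right top page" line per box,
    # with coordinates measured from the bottom-left corner of the image.
    for line in pytesseract.image_to_boxes(img).splitlines():
        _, l, b, r, t, _ = line.split()
        l, b, r, t = int(l), int(b), int(r), int(t)
        crop = img.crop((l, h - t, r, h - b))        # convert to top-left origin
        logits = char_net(to_tensor(crop).unsqueeze(0))
        out.append(classes[logits.argmax(dim=1).item()])
    return "".join(out)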

Resource Restricted Deep Reinforcement Learning
I created a deep Q-network (DQN) trained with experience replay over a state-space-restricted memory bank. In this project I also experimented with various reward schemes and training configurations, with the goal of demonstrating that imposing well-chosen restrictions on the network (to avoid spending time training on superfluous actions or game-state information) allows RL to improve gameplay on reasonable timescales (hours), even in heavily resource-restricted environments. The model was trained on Frogger for the Atari 2600, Super Mario Bros. for the NES, and Flappy Bird from iOS using OpenAI Gym, with demonstrable gameplay improvements in each over the course of training; the core update step is sketched below.

code (numpy, torch), paper (unsubmitted/unpublished)
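
A minimal version of the DQN update with a size-capped replay buffer is shown below. The buffer capacity, batch size, and networks are placeholders, and the project's state-space restriction logic is omitted.

# Minimal DQN update with a bounded experience-replay buffer (illustrative).
import random
from collections import deque
import torch
import torch.nn as nn

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)            # oldest transitions are evicted
    def push(self, *transition):                     # (state, action, reward, next_state, done)
        self.buf.append(transition)
    def sample(self, batch_size):
        s, a, r, s2, done = zip(*random.sample(self.buf, batch_size))
        return (torch.stack(s), torch.tensor(a), torch.tensor(r, dtype=torch.float32),
                torch.stack(s2), torch.tensor(done, dtype=torch.float32))

def dqn_update(q_net, target_net, buffer, optimizer, batch_size=32, gamma=0.99):
    if len(buffer.buf) < batch_size:
        return
    s, a, r, s2, done = buffer.sample(batch_size)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)            # Q(s, a)
    with torch.no_grad():                                        # bootstrapped target
        target = r + gamma * (1 - done) * target_net(s2).max(1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()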

Contextualized Medication Event Extraction
This project consisted of a deep-learning NLP pipeline built for the Harvard Medicine National NLP Clinical Challenge (unsubmitted to the competition for logistical reasons). In it, I experimented with several transformer-based large language models (BERT, DistilBERT, GPT-2, etc.) on the task of processing and extracting relevant medical information from raw physician-note text. The implementation performed well on named entity recognition (NER) and on identifying the contexts in which diagnoses were given; a minimal tagging example is sketched below.

code (numpy, torch, huggingface), paper (unsubmitted/unpublished)
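
A compact illustration of this kind of transformer-based tagging, using Hugging Face's token-classification API; the checkpoint and label count here are placeholders, not the project's fine-tuned clinical model or label scheme.

# Token-level entity tagging with a BERT-style encoder (sketch only).
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

checkpoint = "bert-base-cased"   # placeholder; a fine-tuned clinical model in practice
num_labels = 5                   # e.g. O, B-Drug, I-Drug, B-Dosage, I-Dosage (illustrative)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=num_labels)

note = "Patient was started on 5 mg lisinopril daily for hypertension."
inputs = tokenizer(note, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                  # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for tok, label_id in zip(tokens, pred_ids):
    print(tok, model.config.id2label[int(label_id)])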

Misc. Smaller Machine Learning Projects
In addition to the projects listed above, I have experience with a variety of smaller machine learning projects. These have mainly taken the form of implementing specific network architectures from scratch, such as LeNet, LSTMs, Bayesian networks, VAEs, and GANs, for purposes across both computer vision and NLP. Please find a few examples below, followed by a short LeNet sketch:

Vision models: code (numpy, torch)
Language models: code (numpy, torch, pytorch-lightning)
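
As one example of the from-scratch architecture work mentioned above, here is a LeNet-5-style network in torch (the classic 32x32 grayscale setup); it is a sketch, not a copy of the linked code.

# A LeNet-5-style convolutional network in PyTorch (32x32 grayscale input).
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: LeNet5()(torch.randn(1, 1, 32, 32)).shape -> torch.Size([1, 10])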