Our First Hackathon
In April 2025, Perforated AI hosted its inaugural Neural Network Optimization Hackathon at Carnegie Mellon's Swartz Center, bringing together students and ML practitioners from across Pittsburgh to explore cutting-edge optimization techniques. With $6,000 in prizes and a revolutionary new approach to neural network design, the event showcased how artificial dendrites can dramatically improve model performance and efficiency.
The Challenge: Beyond Traditional Optimization
As ML researchers, we typically rely on three main tools: parameter tuning, dataset curation, and model design. However, these approaches often hit diminishing returns after the initial effort, and designing new models carries the risk of no improvement over existing architectures. This hackathon marked the first time we shared our Perforated Backpropagation™ technology with external users.
What Made This Hackathon Unique
Unlike a traditional hackathon, this event asked participants to bring pre-built PyTorch pipelines already training on their own datasets. The focus was entirely on optimization rather than building from scratch, which let teams demonstrate improvements on projects they were already working on.
Outstanding Results: The Winners
🥇 First Place ($3,000): Natural Language Processing Excellence
Team: Evan Davis from Skim AI Technologies
Evan demonstrated the power of Perforated Backpropagation on BERT language models, achieving accuracy improvements ranging from 3.3% to 16.9% with only 0.01% additional parameters. When optimizing for compression, his 90%-smaller models cut inference costs by 38x and, on restricted edge hardware, processed tokens at 158x the speed of the original.
"What impressed us most was how quickly we were able to implement and scale the technology. In just one week of experimentation, we had it working effectively across numerous variations of BERT models." - Evan Davis
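In a PyTorch pipeline like the ones participants brought, a parameter-overhead claim of this kind can be sanity-checked by counting trainable parameters before and after augmenting a model. The sketch below is illustrative only: the small feed-forward `base` stands in for a BERT-sized encoder, and the extra `nn.Linear` stands in for whatever small module an optimization adds; neither is Evan's actual setup or the Perforated Backpropagation API.

```python
import torch.nn as nn

def param_count(model: nn.Module) -> int:
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Stand-in for a larger encoder (illustrative only, not a real BERT).
base = nn.Sequential(
    nn.Linear(768, 3072),
    nn.GELU(),
    nn.Linear(3072, 768),
)
before = param_count(base)

# Stand-in for a small module added during optimization.
augmented = nn.Sequential(base, nn.Linear(768, 1))
after = param_count(augmented)

overhead_pct = 100 * (after - before) / before
print(f"added parameters: {after - before} ({overhead_pct:.3f}% overhead)")
```

The same two-line count, run on a real pipeline, is how you would verify that an augmentation stays in the small-fraction-of-a-percent range.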
🥈 Second Place ($2,000): Amino Acid Classification Breakthrough
Team: Jingyao Chen, Xirui Liu, and Zhaoyi You (Carnegie Mellon University)
This team tackled protein sequence classification using ProteinBERT, achieving identical F1 scores while using only 21.2% of the original parameters, a massive compression achievement for biological sequence analysis.
"Fine-tuning updates only a small fraction of a large model's parameters, and PerforatedAI's dendrite mechanism made me immediately recognize its strong connection to efficient fine-tuning... This kind of result was unimaginable with traditional methods!" - Jingyao Chen
🥉 Third Place ($1,000): Optimizing Already-Optimized Models
Team: Rushi Chaudhari (Deloitte) and Rowan Morse (University of Pittsburgh)
Taking on the challenge of improving Google's already highly optimized MobileNet V3, this team achieved a 6% relative improvement on the base model and created a compressed variant with 35% fewer parameters while surpassing the original's accuracy.
"Seeing a model that small perform that well was honestly surprising. It really changes the way we think about deploying AI on limited hardware." - Rowan Morse
"For anyone building on a budget, or targeting mobile and embedded devices, this is a direction worth exploring." - Rushi Chaudhari
Real-World Impact: Cost and Performance Benefits
The hackathon results translate into significant real-world benefits:
38x cost reduction when running on Google Cloud T4 GPU instances
158x speed improvement on CPU-only hardware
One edge CPU running Perforated Backpropagation matched the speed of 11 cloud GPUs running the original model
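As a back-of-envelope check, cost ratios like these follow directly from throughput and instance pricing: cost per token is price per hour divided by tokens per hour, so the cost reduction equals the throughput ratio on the same hardware. The sketch below shows the arithmetic; every number in it is a placeholder, not a hackathon measurement or actual Google Cloud pricing.

```python
def cost_per_million_tokens(price_per_hour_usd: float, tokens_per_sec: float) -> float:
    """Inference cost (USD) for one million tokens at a given throughput."""
    seconds_needed = 1_000_000 / tokens_per_sec
    return price_per_hour_usd * seconds_needed / 3600

# Hypothetical numbers, chosen only to illustrate the arithmetic.
t4_price_per_hour = 0.35   # placeholder instance price, USD/hour
orig_tps = 1_000.0         # placeholder throughput, original model
comp_tps = 38_000.0        # placeholder throughput, compressed model

ratio = (cost_per_million_tokens(t4_price_per_hour, orig_tps)
         / cost_per_million_tokens(t4_price_per_hour, comp_tps))
print(f"cost reduction: {ratio:.0f}x")
```

Note that the instance price cancels out: on identical hardware, the cost reduction is exactly the speedup, which is why the 38x cost figure and the compressed model's throughput gain move together.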
Looking Forward
The hackathon's success demonstrates that Perforated Backpropagation offers a promising path forward for building more computationally efficient neural networks. The consistent improvements across diverse applications—from natural language processing to protein analysis to mobile vision—suggest that this technique could become an essential tool in the ML practitioner's toolkit.