My personal projects offer a window into my passions and interests, showcasing the work I’ve pursued over the years. AI and mathematics have always been constants in my journey, fueling my curiosity and driving many of these endeavors. Personally and professionally, I have worked with a broad range of technologies, gaining diverse skills.
The field of AI has evolved significantly over the years. The once-clear divide between AI and ML has merged into a singular focus: neural networks. Where AI researchers once explored ontologies, knowledge graphs, and various discovery techniques, machine learning relied on algorithms like decision trees, SVMs, K-means, and linear regression. Today, most of the field centers on neural networks, transformer models, and recommendation systems, all largely developed in Python.
While practical, this narrow focus risks stalling further innovation. Transformer neural networks, though powerful, have inherent inefficiencies, especially in training. Running them at scale consumes so much energy that it can strain national power grids. As a result, access to these systems is limited to a select few with substantial financial backing, excluding many from the potential benefits of AI.
At industry meetups, I often hear enthusiastic proclamations about AI's potential, yet when I ask about key challenges like continuous learning, the answer is always the same: no. This widespread lack of understanding of how models actually work poses significant dangers. It's clear to me that the field must move beyond simply applying neural networks and start addressing their limitations, understanding their inner workings, and optimizing them.
One company I’m particularly excited about is Numenta, with its Thousand Brains Project. Their work challenges the current paradigm by proposing a more biologically accurate approach to intelligence. Unlike traditional models, Numenta's system aims to replicate the brain's processes for learning, including visual object recognition, in a more efficient and accurate manner. This approach could even be applied to language processing, where the structure of words and phrases in our brain mirrors how we perceive objects visually.
For me, the next step is building a prototype of a system that mimics how the brain might learn arithmetic (specifically addition) without relying on traditional functions or loops. This recursive approach to addition could help us better understand how neurons process complex tasks like simple math, and it aligns closely with the principles of Numenta's work.
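As a first sketch of that idea, here is a minimal Python illustration of addition defined purely by recursion, Peano-style, with no loops and no direct use of a built-in add on two arbitrary numbers. The function names (`successor`, `add`) and the framing of increment-by-one as the single primitive step are my own illustrative assumptions, not Numenta's design:

```python
# Illustrative sketch (assumption, not Numenta's implementation):
# addition defined recursively, Peano-style, with a single primitive
# "successor" step and no loops:
#   a + 0 = a
#   a + b = successor(a + (b - 1))   for b > 0

def successor(n: int) -> int:
    """The primitive step: advance one unit, analogous to a single
    state transition rather than a bulk arithmetic operation."""
    return n + 1

def add(a: int, b: int) -> int:
    """Recursive addition for non-negative b; each call peels off
    one unit of b and applies the successor step on the way back."""
    if b == 0:
        return a
    return successor(add(a, b - 1))

print(add(3, 4))  # 7
```

The point of the exercise is not efficiency (each addition costs `b` recursive calls) but making every elementary step explicit, which is closer in spirit to asking how a network of neurons might carry out counting one transition at a time.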
The future of AI lies in moving beyond current limitations and exploring approaches that more closely mirror biological intelligence. Numenta’s Thousand Brains Theory provides a promising foundation, and I’m committed to building on these ideas to advance the field.
Like Jeff Hawkins, I do not fear the AI age, as long as we build it the right way. The way these systems are currently being built is NOT safe, because nobody really understands how LLMs work. They can be dangerous, but I don't think they will turn into Terminators any time soon.
Technologies: JavaScript, Node.js, MySQL, Python.