Neural Network Concept Ontology

As neural networks get progressively smarter, our own intellect loses some of its unique character and special status.

Context: A neural network can learn to organize the world it sees into concepts—just like we do

While acknowledging the fundamental research interest and significance here, I seriously wonder whether this kind of result shows autonomous systems becoming sophisticated enough to access some hallowed, specially human ontological taxonomy of the world, or whether it demonstrates the inverse: that human object-parsing and cognitive grammar is itself just another algorithmically optimised, contingent solution to the problem of efficient world-modelling and comprehension.

With every step forwards towards AGI, we ourselves also descend the ladder of supraliminal intelligence, from privileged uniqueness into a generalised problem-space in which we are just one among many “most efficient” or effective algorithmic compression solutions for integrated cognition and for information and energy system self-propagation.

