Context: A neural network can learn to organize the world it sees into concepts—just like we do
While acknowledging the genuine research interest and significance here, I seriously wonder whether this kind of result shows autonomous systems becoming sophisticated enough to access some hallowed, specifically human ontological taxonomy of the world, or whether it demonstrates the inverse: that human object-parsing and cognitive grammar is (also) just another algorithmically optimised, contingent solution to efficient world-modelling and comprehension.
With every step forwards towards AGI, we ourselves also descend the ladder of privileged intelligence: from assumed uniqueness into a generalised problem space in which we are just one among many "most efficient" or effective algorithmic compression solutions for integrated cognition and for the self-propagation of information and energy systems.