It is not much of a hop, skip and jump from all these emerging discoveries of proxy mechanisms for invoking rudimentary machine intelligence to the suggestion that we have profoundly placed the ontological cart before the horse. Information-processing systems (of which all matter and dynamical relationships are instances) are not only capable of producing intelligence under optimal circumstances; they are quite literally predisposed and autonomously biased towards it.
Our capacity to recognise these autonomously self-propagating orientations towards representational compression and higher-dimensional abstraction is limited only by our imaginations and by the aggregate cognitive and cultural burdens of normative orthodoxy in science. The complex, non-linear natural systems in which we are immersed engage in tasks far more like intelligence and learning than we are collectively willing to admit.
These proxy machine intelligence systems are really only caricatures of the profoundly complex natural systems they inhabit. Human intelligence, while useful for the kinds of things humans find important, is itself a simpler instance of a much broader, more sophisticated natural bias towards representational self-abstraction.