Context (6 minute video): Should AI Research Try to Model the Human Brain?
This is a philosophically complex domain: should we attempt to create intelligence by emulating our own brains?
Since our science, and our very definitions of embodied (nodal) intelligence or functional utility, are themselves measured by the artefacts, extended cognition (technologies) and effects of our own brains, we would be foolish not to attempt to harness and exploit what physics, chemistry and biology have so successfully refined over billions of years.
On the other hand, efficiency and concision in algorithmically optimal information- and energy-processing represent a problem space for which there can never be a single “best” solution, only “better” ones.
This issue of “to brain or not to brain” in AI is probably (yet another) complex question without a binary answer. My thoughts: take what is useful from what already exists, then seek to generate or cultivate clever analytical or recursive inflections and optimisations from it. Learn from what exists, but do not be imprisoned either by our limited (if growing) understanding of it, or by any structural, functional or material expectations drawn from what has quite clearly been a successful solution to the procedural refinement of intelligence in an organic evolutionary context.
We do not, perhaps, have a firm enough grasp or comprehensive enough definition of intelligence or consciousness to be certain that we would unambiguously recognise it in all possible manifestations. Similarly, we should not be so narcissistic as to believe that the magnificent brains we do possess are any necessary benchmark or teleological summit for what intelligence could be, or for the many forms, sentient expressions, relational symmetries and behavioural artefacts it might exhibit.