(They are dropping the radar.)
A thought that comes to mind: take an ML system trained on some static number n of streaming (sensory) data points/observations. Could narrowing its focus to fewer inputs force a large increase in the utility and efficiency it extracts from the diminished input vectors it still has? Brains certainly do something like this through sensory compensation/substitution, and randomly dropping nodes during a neural-network training epoch can produce something vaguely similar – encouraging resiliency and useful generalisation.
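The node-dropping idea mentioned above is what the deep-learning literature calls dropout. A minimal sketch of the standard "inverted dropout" variant, written from scratch in NumPy (the function name, shapes, and probability here are illustrative, not from any particular framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Zero out a random fraction p of units during training.

    Inverted dropout scales the survivors by 1/(1-p), so the expected
    activation magnitude is unchanged and no rescaling is needed at
    inference time (training=False just passes activations through).
    """
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p  # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)

x = np.ones((4, 8))
y = dropout(x, p=0.5)       # survivors become 2.0, the rest 0.0
z = dropout(x, training=False)  # inference: input returned unchanged
```

Because each forward pass sees a different random subset of units, no single unit can be relied on exclusively, which is roughly the resiliency-through-deprivation effect described above.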
Doubling down on one sensory modality raises the profile, value and significance of what remains, but it also amplifies uncertainty and hard-codes a blind spot. Losing that redundancy seems like a real problem given volatile weather, light and driving conditions. Personal opinion: you would rather have it and not need it than need it and not have it.