fastaguy88 2 days ago

I am always puzzled when biologists make analogies between living systems (or sub-systems) and human-designed devices (even complex devices). Human designed devices have an essential property that biological systems do not -- they were designed. While radios may seem complex, they are built from a relatively small number of component types with very well understood behaviors and typically limited numbers of interactions. Biological systems have no such constraints -- they were not designed, there can be many different ways to do the same thing (in the same cell), there are at least thousands, if not hundreds of thousands of different components, many of which have lots and lots of interactions. Considering how messy the systems are, it is quite remarkable that genetics and biochemistry discovered the basis of heredity (or at least some of it), the genetic code, hundreds of signaling pathways, etc etc. But there is no missing language, because there was no design that used that language.

  • bonoboTP a day ago

    As a CS graduate, I've always had a related inferiority complex towards "real scientists" like biologists, chemists, and physicists who confront the real mess of reality, instead of our cozy self-made thought-castles as computer people. We live in a world where everything is intelligible in principle, if we take the time to dig into how it works; they live in a world of true mystery. However, I take consolation in the fact that CS can seemingly touch the esoteric and mysterious through computability and computational complexity.

    • ipdashc a day ago

      Agreed on the whole inferiority complex thing. Though in addition, honestly, this is why I've always liked computing and electronics - exactly as you said, everything is intelligible in principle, everything is intentionally designed and runs on simple rules, it's just a matter of how much you dig into it.

      It's interesting how you'll often, in culture and media, see computers described as cold and inhuman. Like, I get it, of course. But in a way, there's also something very human about working with them, because every part of the stack was designed, meticulously and painstakingly (or maybe haphazardly) by other humans. The analog EE might worry mainly about the laws of physics, sure, and as you said, the CS theorist might fiddle with pure mathematics. But it seems the majority of working software engineers, and even a good chunk of electronics designers, will spend their time dealing mainly with things designed by other people.

      I get the impression it's not the same as like you said, the biologist that deals with the squishy, incomprehensible products of evolution, or the chemist and physicist that have to try to understand stuff like quantum mechanics. Those are the "cold" fields to me; ours is positively warm and cozy.

      • bonoboTP a day ago

        I think this contributes to collaboration problems across these types of experts. Experts of the squishy have to think in a holistic way that seems jumbled, irrational and arbitrary to the more structured, reductionist, modular, step-by-step input/output thinking of computer experts. In a squishy field you have to be constantly vigilant about unknown factors and have to integrate intuitions into your work. This can make the experience of collaborating on e.g. medicine+machine learning projects very frustrating for both sides. In my experience, biologists/medical people can't think in a layers-of-abstraction manner and kind of talk about every level at the same time. Even when they model something mathematically, they blend their sentences about the concrete use case with the abstract reasoning that is really independent of it and is just math.

        Of course at a high enough level, software development relies on judgment as well: which architecture will be most maintainable or future-proof, when to go with which principle (DRY, YAGNI, KISS, etc.), what approach to use, and so on will be similarly squishy in a way.

        • fastaguy88 a day ago

          The idea that biologists have difficulty with abstractions probably reflects the “solidity” of modern molecular biology. Biologists were doing genetics, working out the rules of heredity, mapping genes onto chromosomes, understanding meiotic cross-over, for decades before the physical nature of the gene was understood. Likewise immunology is full of terms and factors that today have a biochemical explanation, but again, for years only existed as abstractions. Biology is much easier to understand today, now that we have molecules for almost everything. But biologists did decades of very insightful and productive work that was completely based on abstractions.

          • bonoboTP a day ago

            It's a bit like dropping a kid into programming without foundational courses on anything. They'll learn from the "outside inward", as I did before doing a CS degree. 12-year-old me knew Excel, so I understood that function calls with parentheses are a thing, but I had no idea what programming really meant; still, I built HTML+JavaScript pages through trial and error. I didn't know what compilation was, and didn't understand whether lines in the code are executed sequentially or in parallel. I could configure port redirects in the router to set up multiplayer games without understanding what "protocol" or "port" meant.

            Then at university I learned it all from the ground up, clearing many misconceptions. But my misconceptions were also helpful. When we learned about OpenMP, I remembered I had thought that "for" loops would run in parallel, and indeed it turned out it was possible to run them in parallel. I had misconceptions about pass-by-value and pass-by-reference too, but at least I had a prepared mental framework when we formally learned about it.
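            That intuition about parallel loops is sound: when iterations are independent, they can be distributed across threads, which is what OpenMP's "parallel for" does for a C loop. A rough sketch of the same idea in Python (standard library only; the `work` function and sizes are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def work(i):
    # Each iteration depends only on its own index, so the
    # execution order does not matter: the loop is parallelizable.
    return i * 2

# A plain sequential "for" loop...
sequential = [work(i) for i in range(1000)]

# ...and the same loop with iterations spread over threads,
# roughly what OpenMP's "parallel for" does in C.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, range(1000)))

# Both orderings produce the same result.
assert sequential == parallel
```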

            Biologists arrived at the scene similarly, without manuals or foundational courses handed over by God. So it had to start with this "competent ignorance" at first. You don't quite know what you're doing but it works. And then you figure out the building blocks.

      • fwip a day ago

        There are some interesting parallels here.

        If you go deep enough into electronics, you'll run into physics again. The engineers working at the chip fabs (and chip design) work very hard to shield us mortals from the messy details - the idealized transistors and gates we work with in the digital world are a useful abstraction. (I hope never to need to learn about quantum tunneling!)

        In the same way, if you go deep enough into software design (whether "user-facing" or for other developers), you'll run into the messy vagaries of humans and our wetbrains.

        Whether you're dealing with the subcultural expectations of your audience for a drop-down vs radio-button, or writing a tutorial on how to use your library, or thinking about what features your fancy new programming language needs, we rely on the abstractions and rules-of-thumb that we've learned. But those rules come from deep places, using results from neurology, sociology, psychology, etc!

        Everything is deep, in every direction and all the way down. :)

  • minifridge 2 days ago

    Biological systems also have designs that emerged through evolution. Although the complexity may operate at different scales, the main difference is the measurements you can do. Both biological and electrical circuits are dynamic systems whose designs give them emergent functional properties.

    As the article describes, imagine having only the list of radio components instead of their topology (wiring diagram). The problem of figuring out how a radio works with this information, if you know little about its design, becomes quite similar to figuring out how a biological system works.

    The absence of a design diagram, and our inability to measure components at the molecular level without disturbing the state of the system, is the main reason biological systems are so challenging to understand.

    • fastaguy88 2 days ago

      I am a bit more comfortable saying "Biological systems also have 'solutions' that emerged through evolution." That is certainly true. But unlike designers, evolution is perfectly happy to re-invent the wheel (even if it is a less functional wheel). So different, evolutionarily independent, processes may provide the same solution, and of course solutions are constantly re-used to provide slightly different solutions. So I'm not sure that "complexity" is the hardest problem, though it certainly doesn't make things easy. The diversity of solutions for the same problem makes generalization/abstraction even more difficult.

  • doe_eyes 2 days ago

    Sure, but... how often do we object when fellow techies make analogies between just about anything and their field of expertise? We feel that our knowledge makes us quite qualified to explain the economy, to chime in on microbiology, and so on.

    It's really pretty universal.

  • superfist a day ago

    As far as I remember, this paper draws an analogy not to system structures, but to the method of approach in their analysis. If a purely statistical approach to a designed system is presented as flawed, it is even more problematic when applied to more complex living systems. So the paper makes a good point here.

  • UniverseHacker a day ago

    The two are not entirely at odds- modern bioengineering and synthetic biology are getting pretty good at designing living systems that actually work as designed, e.g. a cell factory that produces a useful molecule.

    Modularity and simplicity do evolve naturally when selection pressures make those properties beneficial- and in such cases engineering is then possible, and engineering analogies make sense.

    A few examples that come to mind: DNA, modular assembly line proteins, etc. In such cases there seems to have been a selection pressure for rapid reconfiguration, which favors composable modular systems where one small change can lead to a new functional system, often in a way that follows simple predictable rules. In some cases the systems are not messy at all, and rival the most carefully planned out human-designed systems.

  • kuhewa 2 days ago

    The right column on page 181 addresses these arguments

  • nicman23 10 hours ago

    One is trying to understand things made by people; the other is basically trying to understand things made by adversarial networks over some billions of years.

    oh and there were uncountable networks, at the same time, that were just lucky or not

  • aleph_minus_one a day ago

    > Human designed devices have an essential property that biological systems do not -- they were designed.

    ... assuming you don't believe in creationism or some branches of pre-astronautics (aliens used genetic engineering to create/modify a lot of life on earth). ;-)

  • wyager a day ago

    It doesn't matter if something was evolved or designed; if there is a penalty on increased complexity, the systems will converge towards low-Kolmogorov-complexity implementations. The information theoretic complexity of the human genome is low enough that we have a structural argument that it is scrutable with a reasonable amount of effort.

Micand 2 days ago

As was flagged in the other discussions, see "Could a Neuroscientist Understand a Microprocessor?": https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...

Or a recap from The Atlantic: https://www.theatlantic.com/science/archive/2016/06/can-neur...

I saw a wonderful recording of a talk the authors gave a few years ago (which I regrettably can't find now), and it was amongst the most eye-opening talks I've ever seen.

  • bee_rider a day ago

    It is sort of interesting… I think we will all admit that brains are more complex than computer chips. But I still wonder if there’s some general underlying principle that makes brains more reasonable to study in this fashion than computer chips.

    For example, they talk about destroying one transistor, observing that the thing can’t play Donkey-Kong anymore, and concluding that that is the DK transistor.

    But computer chips are designed to have incredibly long chains of dependencies where each specific transistor does exactly the right thing, every time.

    For neurons, it isn’t so specific, right? They all might fire, depending on the timing, and whether or not they are… I don’t know biology, charged or whatever. The whole system works under the assumption that many components will misfire or be duds at any moment.

    It seems (to me at least) more reasonable to conclude that the DK neuron really is a DK neuron if removing it causes an unrecoverable DK-related failure… because the whole system is built around handling failures! It is somehow special that something can break it.

    • jerf a day ago

      "For neurons, it isn’t so specific, right? They all might fire, depending on the timing, and whether or not they are… I don’t know biology, charged or whatever."

      So we assume. And with some good reason, including both the studies done over the decades, and the fact that we've built systems based on this architecture that match this concept.

      However, it is important to observe that this characteristic is still in the metaphorical spotlight. It is a thing that we can discover. If, in fact, there was One Blessed Neuron that contained the most vital aspect of some critical concept in someone's brain, we currently have zero capability to discover it, zero ability to characterize it, and effectively zero ability to manipulate that neuron directly in some experimental fashion once identified. Therefore, we should be fairly suspicious of the claim that we've eliminated this as even being a possibility.

      I expect it is unlikely that there is such a thing as the One Blessed Neuron, even so, but there are a large number of other hypothetical organizations that could exist beyond "an amorphous neural net with nothing really located anywhere", and we have good evidence for that as well; the "regions" of the brain, the fact we can visibly see physically different organizations of neurons in certain regions and associate them with certain tasks. I would not even dream of trying to guarantee that there is no structure lying in between the gross differences we already know about and the hypothetical undifferentiated neural mass, the structures in those complexity voids between what we can currently see. We again have a lot of inductive reasons to believe that there is likely more structure there we do not even have a clue exists, on the grounds that every time we get a closer look at something for the first time ever, it is rarely only and exactly what we expected. It's such a notable outcome that it gets called out specifically when it happens, precisely because it's rare.

    • rcxdude a day ago

      Yes, there's more of a degree of redundancy in brains than microprocessors, but the 'lesion' approach tends to involve affecting more than just one neuron. The main point of that section is that you've got to be careful about what you define as a 'function'. In the processor case, if you defined function in terms of a register or instruction working properly, then this approach could actually give you a pretty good map. But if you've defined it as whether a particular game runs, it doesn't give you a good map at all, because most games exercise most functions of the microprocessor, and so which small parts aren't important to that particular game is only incidental. With brains the main thing is the underlying functions are not really known, so you need to approach with care.
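      The difference between the two definitions of "function" can be made concrete with a toy sketch in Python (all component names and dependencies are invented for illustration, not taken from the paper):

```python
# Toy "lesion study" on a fake chip: each low-level function
# (register, instruction) uses a small, specific set of transistors,
# while each game exercises many functions at once.
functions = {
    "register_A": {1, 2},
    "register_B": {3, 4},
    "adder":      {5, 6, 7},
}

games = {
    "donkey_kong": {"register_A", "adder"},
    "pitfall":     {"register_A", "register_B", "adder"},
}

def lesion(transistor):
    """Which functions, and hence which games, break if one transistor dies?"""
    broken_fns = {f for f, ts in functions.items() if transistor in ts}
    broken_games = {g for g, fs in games.items() if fs & broken_fns}
    return broken_fns, broken_games

# Lesioning transistor 3 maps cleanly onto register_B...
fns, gms = lesion(3)
assert fns == {"register_B"}
# ...but at the level of games it only breaks pitfall, which could
# mislead us into calling transistor 3 "the pitfall transistor".
assert gms == {"pitfall"}
```

      At the register level the lesion identifies a real function; at the game level the same lesion tempts us into the wrong label, because which games happen to avoid a given component is only incidental.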

      • bonoboTP a day ago

        I used to have the impression that the brain is more "mixed up" than it really is. It's quite hard to get the right impression as a layman. Experts push too hard towards the extreme that makes us think that everything is involved in everything and it's all thoroughly uniformly mixed, advising against believing those old naive brain charts etc. But really, certain functions are remarkably well localized. This kind of overcorrection is quite common if one often reads HN-like "well actually"-content, which is supposed to supplement a common perception with a small caveat. Beware of internalizing just the caveat and forgetting the main thrust.

  • stochastician a day ago

    Author of that paper here -- What an incredibly nice comment to wake up to at 3am, thank you! This paper made me quit neuro and switch to systems that are easier to model (small molecules) which it turns out are still annoyingly difficult!

dekhn a day ago

Biologists have investigated a number of systems which, given the nature of evolution, bear eerie resemblance to the systems engineers have designed to solve similar problems.

Here's an example: https://en.wikipedia.org/wiki/Flagellum#Motor Over the decades people gradually uncovered more and more of the structure and ultimately recognized that the flagellum motor has components that correspond to engineered motors, with rotors, stators, energy sources, rings, bearings, etc. Amusingly, after all the effort, we recognized that the motor greatly resembles ATP synthase, one of the most core energy management proteins we have (which carries out a totally unrelated function).

I imagine that any rationally designed human engineering product could be understood by a biophysicist with no real technical knowledge beyond basic electrical and mechanical engineering.

See also "The Salvation of Doug", which gives an absolutely hilarious analogy between using scientific methods to understand biology, and reverse-engineering a car factory:

"""Emboldened by his successes, the next morning the geneticist tied the hands of an individual dressed in a suit and carrying a briefcase in one hand and a laser pointer in the other (he was a vice president). That evening the geneticist, and Doug (although he would not openly admit it), anxiously waited to see the effect on the cars. They speculated that the effect might be so great as to prevent the production of the cars entirely. To their surprise, however, that afternoon the cars rolled off the assembly line with no discernible effect."""

sundarurfriend a day ago

> changing the colors would have only attenuating effects (the music is still playing but a trained ear of some can discern some distortion)

What is this referring to? For most pieces inside the radio, the colors would be just superficial and not have any effect. There are some components (capacitors, resistors) where the colors have meaning, and changing them could lead to overcurrent and fires/damage or insufficient conduction and things not really working.

I'm finding it hard to imagine what parts the author is thinking of where changing the colors would have only attenuating effects, where it would take a trained ear to hear the distortion. In the set of all outcomes from changing just colors, most would be either "nothing happened" or "radio's operation is clearly messed up". The author's scenario seems like one of the less likely ones.
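For what it's worth, the place where color most famously carries meaning is the resistor color code, where the painted bands encode the component's value. A minimal decoder sketch for the standard 4-band code (tolerance band omitted; the band-to-digit table is the standard one, the function name is mine):

```python
# Standard digit values for resistor color bands.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistance_ohms(band1, band2, multiplier):
    """Decode the first three bands of a 4-band resistor:
    two significant digits, then a power-of-ten multiplier."""
    return (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[multiplier]

# yellow-violet-red = 47 * 10^2 = 4700 ohms (4.7 kOhm)
assert resistance_ohms("yellow", "violet", "red") == 4700
```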

  • creer a day ago

    In that section, they are not yet altering the radios. They are comparing the ones from a pile of purchased radios. So this is about the different radios having components from different manufacturers, different tech, different quality, leading to (usually) minor sound differences.

pvaldes 2 days ago

If he is unemployed, you can bet that he can and will do it.

svilen_dobrev 2 days ago

The two schematic views reminded me of a) the dualisms in Pirsig, and b) another similar one: two views of what-a-cat-consists-of, one according to the granny owner and another according to the surgeon, but I cannot remember which book that was from. Anyone?

anthk a day ago

Biology makes me think that the universe/consciousness itself is a product of information parsing itself, making that 'essence' or 'soul' (as the Ancients defined it) a truth beyond matter. No, not something religious, but more related to computation. But for this we should accept that 'data' is an intrinsic part of physics, not just matter.

I would like an intro to Thermodynamics, but IDK where to start.

ngcc_hk 2 days ago

The evolution approach is more like throwing spaghetti at the wall, not design. The key event seems to be the Cambrian Explosion of animal life, in which nature "experimented" with body parts etc. and saw what was best.

If a real biologist did it, it would be like the evolution-line approach, which some think led the Wuhan lab to create covid, funded by the USA and approved by … That is debatable, but the approach is not. Unless we can evolve a radio …

One must note that birds seem to have evolved to use quantum mechanics to direct their migration. Hence that is not unthinkable.

The question is whether having radio communication has an evolutionary advantage. Ignoring the issue that this is not science (as it is not refutable), that is the real question.

  • wizzwizz4 a day ago

    Birds don't use quantum mechanics. Birds use reality. Quantum mechanics is a model of reality.