• Echo Dot@feddit.uk · 5 days ago

    Being an absolutist is all fine and dandy (for example, it makes philosophical debate much quicker) right up until you actually apply it to real life, at which point it becomes untenable.

    It’s like the problem with Asimov’s First Law of Robotics (I know the laws were intentionally designed not to work, but they’re a useful framework for thinking about this).

    A robot must not harm a human, or through inaction allow a human to come to harm. So a robot could not use violence to stop a terrorist attack, because doing so would require it to harm a human; yet at the same time, not stopping the attack would allow other humans to come to harm. No action satisfies the rule, so the problem has no solution given those constraints (see the sketch below).
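    To make the deadlock concrete, here’s a minimal sketch (the scenario, numbers, and names are purely my own illustration, nothing from Asimov): encode the First Law as a hard constraint and test every available action against it. In this scenario, every option violates the rule, so the rule permits nothing:

    ```python
    # Purely illustrative scenario: an attacker is about to hurt five
    # bystanders. Each option records how many humans the robot would harm
    # directly, and how many it would, through inaction, allow to be harmed.
    ACTIONS = {
        "tackle_attacker": {"harms": 1, "allows_harm": 0},
        "do_nothing":      {"harms": 0, "allows_harm": 5},
    }

    def first_law_permits(action):
        """Absolutist First Law: no harm, and no harm through inaction."""
        outcome = ACTIONS[action]
        return outcome["harms"] == 0 and outcome["allows_harm"] == 0

    permitted = [a for a in ACTIONS if first_law_permits(a)]
    print(permitted)  # [] -- every option breaks the law, so nothing is permitted
    ```

    The numbers don’t matter: as long as both harm and harm-through-inaction are absolutely forbidden, any situation where they trade off against each other leaves the robot with an empty set of permitted actions.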

    Any intellectually honest approach to philosophy has to recognize that every situation is unique. What you need is a moral framework that lets you adapt to the situation without resorting to absolutism (like the Laws of Robotics). Otherwise you might as well adopt the philosophy of never doing anything at all, and you’d get exactly the same result; one adaptive alternative is sketched below.
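    For contrast, here’s an equally rough sketch of a non-absolutist rule (again my own illustration, reusing the ACTIONS table above): rank the options by total harm instead of forbidding harm outright, and the deadlock disappears.

    ```python
    def least_harm(actions):
        """A non-absolutist alternative: pick the option that harms the
        fewest humans, rather than forbidding harm outright."""
        return min(actions, key=lambda a: actions[a]["harms"] + actions[a]["allows_harm"])

    print(least_harm(ACTIONS))  # tackle_attacker -- harming 1 beats allowing 5
    ```

    Of course “minimize total harm” is itself a contestable framework; the point is only that it can produce an answer where the absolutist rule produces none.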

    Given that we may very soon actually have robots and AI, this is a more important question than ever before, and I really don’t think it’s been given the attention it deserves.