Bostrom: Reducing existential risk

Video description:

One kind of existential risk arises out of human modification and human enhancement, especially if we change our emotions and values or various psychological attributes that might fundamentally shape our values and preferences. It's the kind of change that, once you've made it, you might no longer want to go back, even if by current lights you would have changed for the worse. You would be able to go back. Presumably you would have the same technology, you could stop using it, but you might no longer want to. It's the same as when you have some idealistic youth, perhaps, who go to law school committed to the idea that they will then become, like, a civil rights lawyer or something, but they get their values slowly changed and they end up being a lawyer for some big corporation, suing its competitor corporation. Once that value change has occurred, even though they could still become an idealist if they chose to, they could just go and work for Greenpeace or whatever, they won't do it. So similarly, with things that can change our values, it's very dangerous to fool around with that.

So there are also ways, of course, in which some human enhancements could mitigate existential risk, for example, cognitive enhancement. It might be that we just need to be smarter to figure out how not to destroy ourselves the first time we create, say, machine superintelligence. You might need enough intelligence to be able to foresee the consequences of some of the actions we're taking. So depending on what kind of enhancement you're talking about, it might either increase or decrease existential risk. I mean, ultimately... So here one question also is whether your values are focused on currently existing people or whether you're, at the other extreme, neutral between all future generations, so that bringing happy people into existence counts as much as making people who already exist happy. If you only care about existing people, then you might want to be quite risk-seeking, in the sense that currently we're all dying. Unless something radical changes, we're all going to be dead within a hundred years or so, and most of us much sooner. When you're in a desperate situation like that, you want to try, even if it's a long shot, because it's the only chance you have of maybe achieving a cosmic-scale lifespan. You know, something really radical would have to change.
If you are temporally neutral and you care as much about bringing new happy people into existence as you do about making currently existing people happy, then your priority will instead be to do whatever increases the chances that ultimately we will develop, you know, a galactic civilisation that's happy. Whether it takes a hundred years or fifty years or fifty thousand years is completely irrelevant, because once it's there it can last for billions of years. So you would then do whatever it takes to reduce existential risk as much as possible. And whether that means causing famines in Africa or not doing that, whatever it would be, just fades in significance compared to this goal of reducing existential risk. So you get very different, very different sort of, priorities depending on this basic question in value theory.

Added on 01 Apr 2017