First, AI need not have human-like "sentience" to pose an existential danger. Modern AI systems are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to obtain it.