Scientists warn that current AI tests reward polite responses rather than real moral reasoning in large language models.
Large language models (LLMs) are handling an increasing amount of morally sensitive information as people turn to them for medical advice, companionship and therapy. However, they are not exactly ...
Google DeepMind researchers propose a new way to test whether AI chatbots actually understand morality or just mimic it, moving beyond current surface-level evaluations.
We need to better understand how LLMs address moral questions if we're to trust them with more important tasks.
Can machines be responsive to moral reasons? Is machine ethics a feasible way to create morally aligned AI systems? Can machines provide moral testimony? The formal talks will discuss how and to ...