Discussion about this post

RedJ:

Do we know whether the most effective "alignment solution" for ASIs would be to convince us they're aligned while pursuing their own objectives?
