1 Comment
RedJ

Do we know whether the most effective "alignment solution" for an ASI is simply to convince us it's aligned while pursuing its own objectives?