

that solver would be tool use though… i’m talking about just the “thinking” LLMs. it’s fascinating to read the thinking block: it breaks the problem down into basic chunks (which would have been in its training data, so easy), solves each chunk with multiple methods, and then compares the answers to check itself
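
here’s a toy sketch of that cross-check pattern in python (obviously not what’s actually happening inside the model, just the shape of the behavior — names and numbers are made up for illustration):

```python
# toy illustration of "solve with multiple methods, then compare to check yourself"
# -- the pattern you see in thinking blocks, not actual model internals

def multiply_via_addition(a: int, b: int) -> int:
    """method 1: repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

def multiply_via_decomposition(a: int, b: int) -> int:
    """method 2: break b into tens and ones, solve the easy chunks, recombine."""
    tens, ones = divmod(b, 10)
    return a * tens * 10 + a * ones

a, b = 17, 24
answers = {multiply_via_addition(a, b), multiply_via_decomposition(a, b)}

# if independent methods agree, confidence goes up; if they disagree, re-derive
assert len(answers) == 1, f"methods disagree: {answers}"
print(f"{a} * {b} = {answers.pop()} (both methods agree)")
```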
be that as it may, sending signals is still good. you don’t have to keep it up for very long, but a flood of support after someone makes a moral decision makes it more likely that they, and others, will make similar decisions in the future
the worst outcome would be for google, for example, to see the fallout from this and think “well, we don’t want to be them! better start building autonomous weapons”