Jason Morris (09/12/2024, 12:50 AM)
Jason Morris (09/12/2024, 12:54 AM)
Nilesh Trivedi (09/12/2024, 5:16 AM)
Nilesh Trivedi (09/12/2024, 5:19 AM)
Konrad Hinsen (09/12/2024, 6:35 AM)
Denny Vrandečić (09/12/2024, 3:19 PM)
Jason Morris (09/12/2024, 8:42 PM)
Jasmine Otto (09/13/2024, 3:14 AM)
Jasmine Otto (09/13/2024, 3:14 AM)
Jasmine Otto (09/13/2024, 3:14 AM)
Chris Knott (09/17/2024, 3:25 PM)
Jason Morris
(09/17/2024, 3:35 PM)
Chris Knott (09/17/2024, 3:43 PM): We can't risk using an LLM at the moment, because how it thinks is a black box. As it learns, it develops "concepts" of some sort, but we can't know whether these internal categories are, for example, illegally racist. We need to develop a shared intermediary language that both the LLM and technical users can read and understand. That would let us check and control the LLM's thinking.
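(One way to picture the "shared intermediary language" idea above: instead of acting on free-text reasoning, the LLM is asked to emit its decision logic in a small structured form that a human or an audit script can inspect. A minimal sketch, with an entirely hypothetical rule format and attribute names:)

```python
# Hypothetical sketch of auditing an LLM's "thinking" when it is
# constrained to emit decision rules in a shared intermediary language
# (here, plain dicts of attribute tests) rather than opaque weights.

# Attributes the audit forbids the model to condition on (illustrative).
PROTECTED_ATTRIBUTES = {"race", "ethnicity", "religion", "gender"}

def audit_rules(rules):
    """Return the rules that condition on a protected attribute.

    Each rule is a dict like:
        {"if": {"income": "< 20000"}, "then": "flag_for_review"}
    """
    violations = []
    for rule in rules:
        used_attributes = set(rule.get("if", {}))
        if used_attributes & PROTECTED_ATTRIBUTES:
            violations.append(rule)
    return violations

# Example rules an LLM might emit in the intermediary language.
llm_rules = [
    {"if": {"income": "< 20000"}, "then": "flag_for_review"},
    {"if": {"race": "== 'X'", "postcode": "== '123'"}, "then": "deny"},
]

flagged = audit_rules(llm_rules)
print(len(flagged))  # the second rule conditions on a protected attribute
```

Because the rules are readable data rather than hidden internal categories, the check can run before any rule is applied, which is the "check and control" step the message argues for.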
Chris Knott (09/17/2024, 3:52 PM)