“After a thorough examination, we may conclude that the item’s amateurish preparation and local origin are suggestive of a scribal exercise. The use of an available mould that was not suitable for a tablet, the child’s fingerprint on the reverse and the corrected mistakes in the script all point to an inexperienced scribe.”
“Four factors are found to be significant predictors of the position of primary stress: endings, word complexity, the segmental structure of the final syllable, and syllable count. Moreover, this study confirms previous observations on the tendency for American English to have more final stress in French loanwords than British English.”
Dabouis, Q. and Fournier, P. (2024) ‘Stress in French loanwords in British and American English’, Journal of Linguistics, pp. 1–26. doi: https://doi.org/10.1017/S0022226724000136.
2023 was the northern hemisphere’s hottest summer in 2,000 years
“Looking back at the past 2,000 years, the team searched for the warmest summers on record to see how they compared to 2023. They found that the hottest June to August in the pre-industrial era was in 246 CE when temperatures were around 0.88°C above average.
This record stood for over 1,000 years, before being broken repeatedly since the late 1990s.”
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
“I analyze Machiavelli’s frequent references to hope throughout his corpus to offer an explanation of what he means by ‘hope’, examine the relation between hope and fear, and identify the benefits, dangers, and limits of these two foundational and complementary passions.”
AI deception: A survey of examples, risks, and potential solutions
"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."