"After a thorough examination, we may conclude that the item’s amateurish preparation and local origin are suggestive of a scribal exercise. The use of an available mould that was not suitable for a tablet, the child’s fingerprint on the reverse and the corrected mistakes in the script all point to an inexperienced scribe."
“After a thorough examination, we may conclude that the item’s amateurish preparation and local origin are suggestive of a scribal exercise. The use of an available mould that was not suitable for a tablet, the child’s fingerprint on the reverse and the corrected mistakes in the script all point to an inexperienced scribe.”
“Four factors are found to be significant predictors of the position of primary stress: endings, word complexity, the segmental structure of the final syllable, and syllable count. Moreover, this study confirms previous observations on the tendency for American English to have more final stress in French loanwords than British English.”
Dabouis, Q. and Fournier, P. (2024) ‘Stress in French loanwords in British and American English’, Journal of Linguistics, pp. 1–26. doi: https://doi.org/10.1017/S0022226724000136.
“The Harvard team established the practical makings of the first quantum internet by entangling two quantum memory nodes separated by an optical fiber link deployed over a roughly 22-mile loop through Cambridge, Somerville, Watertown, and Boston.”
From the manuscript to you: How Old Norse manuscripts are read and edited
"A case-study in how a page from an Old Norse manuscript (in this case the Codex Regius of the Poetic Edda) is edited for publication in a modern-day book. Manuscript images from the Árni Magnússon Institute at the University of Iceland (handrit.is)."
Video length: thirty minutes and fifteen seconds.
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."
Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
“I analyze Machiavelli’s frequent references to hope throughout his corpus to offer an explanation of what he means by ‘hope,’ examine the relation between hope and fear, and identify the benefits, dangers, and limits of these two foundational and complementary passions.”
AI deception: A survey of examples, risks, and potential solutions
"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."