Scientific articles, specialists' newsletters, and other resources I drew on to write the in-depth pieces.
- AAAI Association for the Advancement of Artificial Intelligence (2025) Future of AI Research https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf
- Abbott E.A. (2020) Flatlandia, Feltrinelli (First published 1884)
- Affirming the Scientific Consensus on Bias and Discrimination in AI (2025) https://www.aibiasconsensus.org/
- Ameisen E. et al (2025) Circuit Tracing: Revealing Computational Graphs in Language Models. Transformer Circuits Thread (Anthropic) https://transformer-circuits.pub/2025/attribution-graphs/methods.html
- Balassone S. (2023) Scusi il disturbo — Chiacchiere con personaggi che furono o che sono (podcast) Radio Immagina
- Biese P. (2025) https://substack.com/@pascalbiese
- Bommasani R. and 114 other authors (2022) On the opportunities and risks of foundation models arXiv:2108.07258
- Borji A. (2023) A Categorical Archive of ChatGPT Failures https://arxiv.org/abs/2302.03494
- Cameron R.W. (2024) Decoder-only transformers: the workhorse of generative LLMs Deep (Learning) Foqus
- Chen C. (2025) China built hundreds of AI data centers to catch the AI boom. Now many stand unused MIT Technology Review https://www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/
- Cho A. et al (2024) Transformer Explainer: Interactive Learning of Text-Generative Models https://arxiv.org/pdf/2408.04619
- Chomsky N., Roberts I. and Watumull J. (2023) The False Promise of ChatGPT The New York Times
- Dahl M. et al (2024) Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models https://arxiv.org/abs/2401.01301
- Dash S. (2025) https://medium.com/@shaileydash
- DeepSeek-AI (2024) DeepSeek-V3 Technical Report https://arxiv.org/abs/2412.19437
- DeepSeek-AI (2025) DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning https://arxiv.org/abs/2501.12948
- de Gregorio Ignacio (2025) https://medium.com/@ignacio.de.gregorio.noblejas
- Denis O. (2025) https://www.linkedin.com/in/denis-o-b61a379a/
- Dumas C. (2025) How do Llamas process multilingual text? A latent exploration through activation patching. Proc. 41st Int. Conf. on Machine Learning. https://openreview.net/forum?id=0ku2hIm4BS
- Ferri A. (2025) Claude Code saved us 97
- Floridi L. (2025) https://www.linkedin.com/in/luciano-floridi/recent-activity/all/
- Funk Jeffrey (2025) https://www.linkedin.com/in/dr-jeffrey-funk-a979435/recent-activity/all/
- Jimenez C.E. (2025) SWE-bench: Can Language Models Resolve Real-World GitHub Issues? https://arxiv.org/abs/2310.06770
- Kang C, Choi H. (2023) Impact of co-occurrence on factual knowledge of large language models https://arxiv.org/abs/2310.08256
- Kauf C., Chersoni E., Lenci A., Fedorenko E., Ivanova A.A. (2024) Comparing plausibility estimates in base and instruction-tuned large language models arXiv:2403.14859
- Kim Y. et al (2025) Medical Hallucination in Foundation Models and Their Impact on Healthcare https://arxiv.org/abs/2503.05777
- Kurenkov A. (2020) A Brief History of Neural Nets and Deep Learning Skynet Today
- Lenci A. (2008) Distributional semantics in linguistic and cognitive research Rivista di linguistica 20: 1-31 https://www.italian-journal-linguistics.com/app/uploads/2021/05/1_Lenci.pdf
- Lenci A. (2023) Understanding natural language understanding systems. A critical analysis https://arxiv.org/abs/2303.04229
- Lindsey J. (2025) On the Biology of a Large Language Model. Transformer Circuits Thread (Anthropic) https://transformer-circuits.pub/2025/attribution-graphs/biology.html
- Lockett W (2025) https://medium.com/@wlockett
- Mitchell M. (2022) L'intelligenza artificiale — Una guida per esseri umani pensanti, Einaudi (Original edition 2019)
- Mitchell M. (2025) Artificial Intelligence learns to reason. Science 387, Issue 6740 DOI: 10.1126/science.adw5211
- Nezhurina M., Cipolina-Kun L., Cherti M., Jitsev J. (2024) Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models arXiv:2406.02061
- Nielsen M. (2019) Neural networks and deep learning. Available at http://neuralnetworksanddeeplearning.com/
- OpenAI (2025) OpenAI o3 and o4-mini System Card https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
- Peterson A.J. (2024) AI and the problem of knowledge collapse https://arxiv.org/abs/2404.03502
- Peterson A.J. (2025) AI and the problem of knowledge collapse. Springer https://link.springer.com/article/10.1007/s00146-024-02173-x
- Piad-Morffis A. (2024) Why reliable AI requires a paradigm shift Mostly Harmless Ideas
- Piad-Morffis A. (2024) Let’s build our own ChatGPT Mostly Harmless Ideas
- Piad-Morffis A. (2025) https://blog.apiad.net/s/mostly-harmless-ai
- Kheya A.G. et al (2024) The Pursuit of Fairness in Artificial Intelligence Models: A Survey https://arxiv.org/abs/2403.17333v1
- Knight W. (2025) Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models. Wired https://www.wired.com/story/ai-safety-institute-new-directive-america-first/
- Ranieri M., Cuomo S., Biagini G. (2024) Scuola e intelligenza artificiale, Carocci
- Raschka S. (2024) How good are the latest open LLMs? And is DPO better than PPO? Ahead of AI
- Ravichandiran S. (2021) Getting started with BERT Packt Publishing
- Shumailov I. et al (2024a) The curse of recursion: training on generated data makes models forget https://arxiv.org/abs/2305.17493
- Shumailov I. et al (2024b) AI models collapse when trained on recursively generated data. Nature https://doi.org/10.1038/s41586-024-07566-y
- Sukhareva M. (2025) https://www.linkedin.com/in/msukhareva/
- Turness D. (2025) AI distortion is a new threat to trusted information. BBC https://www.bbc.co.uk/mediacentre/2025/articles/how-distortion-is-affecting-ai-assistants/
- Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser L., Polosukhin I. (2017) Attention is all you need arXiv:1706.03762 (last revised 2023)
- Wendler C., Veselovsky V., Monea G., West R. (2024) Do Llamas work in English? On the latent language of multilingual transformers arXiv:2402.10588
- Xu Y. (2024) A Survey on Multilingual Large Language Models: Corpora, Alignment, Bias https://arxiv.org/abs/2404.00929