Bibliography: Crash Course on AI
Abbott E.A. (2020) Flatlandia, Feltrinelli (first published 1884)
Balassone S. (2023) Scusi il disturbo — Chiacchiere con personaggi che furono o che sono (podcast) Radio Immagina
Bommasani R. and 114 other authors (2022) On the Opportunities and Risks of Foundation Models arXiv:2108.07258
Borji A. (2023) A Categorical Archive of ChatGPT Failures arXiv:2302.03494
Wolfe C.R. (2024) Decoder-Only Transformers: The Workhorse of Generative LLMs Deep (Learning) Focus
Chomsky N. (2023) The False Promise of ChatGPT The New York Times
Kang C., Choi H. (2023) Impact of Co-occurrence on Factual Knowledge of Large Language Models arXiv:2310.08256
Kauf C., Chersoni E., Lenci A., Fedorenko E., Ivanova A.A. (2024) Comparing plausibility estimates in base and instruction-tuned large language models arXiv:2403.14859
Kurenkov A. (2020) A Brief History of Neural Nets and Deep Learning Skynet Today
Lenci A. (2008) Distributional semantics in linguistic and cognitive research Rivista di Linguistica 20: 1-31
Lenci A. (2023) Understanding natural language understanding systems. A critical analysis arXiv:2303.04229
Mitchell M. (2022) L’intelligenza artificiale — Una guida per esseri umani pensanti, Einaudi (original edition 2019)
Morffis A.P. (2024) Why reliable AI requires a paradigm shift Mostly Harmless Ideas
Morffis A.P. (2024) Let’s build our own ChatGPT Mostly Harmless Ideas
Nielsen M. (2019) Neural Networks and Deep Learning. Available at http://neuralnetworksanddeeplearning.com/
Peterson A.J. (2024) AI and the problem of knowledge collapse arXiv:2404.03502
Ranieri M., Cuomo S., Biagini G. (2024) Scuola e intelligenza artificiale, Carocci
Raschka S. (2024) How good are the latest open LLMs? And is DPO better than PPO? Ahead of AI
Ravichandiran S. (2021) Getting Started with Google BERT, Packt Publishing
Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser L., Polosukhin I. (2017) Attention Is All You Need arXiv:1706.03762
Wendler C., Veselovsky V., Monea G., West R. (2024) Do Llamas Work in English? On the Latent Language of Multilingual Transformers arXiv:2402.10588