Alignment Jam: ML reliability verification, May 26–28, 2023
Join us to hack away on research into ML robustness verification & reliability!
Prague is joining more than 30 locations around the world to research the safety of ML systems as a part of Alignment Jams.
Robust generalization of machine learning systems is increasingly important as neural networks are applied to safety-critical domains.
With Alignment Jams, we get a chance to create impactful research on the verifiable safety of these networks.
You will compete with participants from across the globe and have the chance to review each other's projects as well!
You’ll experience a weekend of intense, focused research work on validating the safety of neural networks in various domains (e.g., language), using tools such as benchmarks, interpretability methods, and more.
We especially welcome students, researchers, and practitioners of machine learning and related fields.
You can sign up for the event here: https://forms.gle/CxD4wzTtpVnjrczi8
See the event on Facebook.