Hi all,
Just checking in. Let me know when would be a good time to talk next steps. If you already have a date and time for your July (or August) event, I would be happy to start checking which of our speakers are available.
Thanks, Tim
On Thu, May 28, 2020 at 6:34 AM Sanchit Balchandani <balchandani.sanchit@gmail.com> wrote:
On Thu, May 28, 2020 at 12:58 AM Tim Bonnemann <planspark@gmail.com> wrote:
Hi,
This is a follow-up to my discussion with Kalyan Prasad regarding potential IBM speaker involvement at one of your upcoming (virtual) events.
Hi Tim,
Thanks a lot for reaching out to us. We'd be glad to host speakers from IBM on the topics you've mentioned below. We already have a couple of events planned for June, so we could plan an event in July if that works for you. Kalyan will reach out to you for more details or a quick discussion if needed.
Looking forward to collaborating on this event.
Thanks, Sanchit
For background:
The IBM Data Science Community <https://community.ibm.com/community/user/datascience/> is a digital venue for more than 11,000 data scientists, AI developers, machine learning engineers, and like-minded technical practitioners to learn, share, and engage. We partner with leading non-IBM meetups in key tech hubs around the world that have a strong focus on data science and AI, supporting them through speakers, sponsorships, and (occasionally) venue space.
Our team currently has four topics on offer (please see abstracts below):
- AI Fairness
- AI Explainability
- AutoML
- ML Ops
Other topics include:
- Geospatial Libs
- Privacy-Preserving Machine Learning
- Customizing JupyterLab Using Extensions
- AI Pipelines Powered by Jupyter Notebooks
Talks usually run 30-45 minutes but can be customized to fit lightning-talk slots or to include more hands-on, workshop-style elements.
Hope you find some of these interesting. Let me know if you have any questions.
Thanks, Tim
*Removing Unfair Bias in Machine Learning* "Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale. Many algorithms are now being reexamined due to illegal bias. So how do you remove bias & discrimination from the machine learning pipeline? In this webinar you’ll learn about debiasing techniques that can be implemented using the open source toolkit AI Fairness 360.
AI Fairness 360 (AIF360) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from the top AI fairness researchers across industry & academia.
In this webinar you’ll learn:
- How to measure bias in your data sets & models
- How to apply fairness algorithms to reduce bias
- How to work through a practical use case of bias measurement & mitigation in a data-driven medical care management scenario"
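To give a flavor of the AIF360 workflow described above, here is a minimal sketch in Python; the tiny synthetic dataset and the choice of 'sex' as the protected attribute are illustrative assumptions only, not part of the webinar material:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Tiny synthetic dataset: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable outcome (1 = favorable).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.2, 0.5],
    'label': [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df,
                             label_names=['label'],
                             protected_attribute_names=['sex'],
                             favorable_label=1,
                             unfavorable_label=0)

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Step 1: measure bias as the difference in favorable-outcome rates between groups.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Mean difference before reweighing:", metric.mean_difference())

# Step 2: mitigate with Reweighing, a pre-processing algorithm that adjusts
# instance weights so the groups look balanced to a downstream model.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

# Step 3: re-measure bias on the transformed dataset (weights pull it toward zero).
metric_rw = BinaryLabelDatasetMetric(dataset_rw,
                                     unprivileged_groups=unprivileged,
                                     privileged_groups=privileged)
print("Mean difference after reweighing:", metric_rw.mean_difference())

Reweighing is just one of AIF360's pre-processing algorithms; its in-processing and post-processing algorithms follow a similar transformer-style interface.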
*Explainable Workflows using Python* This talk approaches the typical data science workflow with a focus on explainability. Simply put, it covers the skills and tactics data scientists use to articulate their findings to end users, stakeholders, and other data scientists. From data ingestion and cleaning through feature selection to, ultimately, model selection, explainability can be incorporated into a data scientist's workflow. Using a combination of semi-automated and open source software, this talk walks you through an explainable workflow.
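The talk does not prescribe specific tooling, but as one illustrative example of building explainability into such a workflow, here is a minimal sketch using scikit-learn's permutation importance; the dataset and model are assumptions chosen only for illustration:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a bundled dataset and fit a baseline model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features, largest drop first.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Permutation importance is model-agnostic, which is handy when the same explanation step needs to apply across candidate models during model selection.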
*AutoML, a Review* AutoML is a term that appears increasingly in tech industry articles and vendor product claims, and it is also a hot topic within academic AI research. Nearly all of the public cloud vendors promote some form of AutoML service. The tech “unicorns” are developing AutoML services for their data platforms, many of which have been made open source. A flurry of smaller tech startups promise to “democratize” ML and relieve AI-related hiring pains for enterprise customers. Given all the buzz, what does “AutoML” actually mean? This meetup reviews the current space and what to expect from AutoML.
*ML Ops, a Meetup* Consider how the software development life cycle (SDLC) is well-defined at this point: planning, creating, testing, deploying, maintaining – or some variant, depending on your software methodology. The gist remains consistent: computer software runs “logic” on hardware, the test suites are repeatable, and it’s a relatively deterministic process. With machine learning, by contrast, we’re working with probabilistic systems. Instead of writing code as instructions, we’re guiding these systems to learn from data. IBM has a mission to help bring machine learning capabilities to all, so we can all participate in the AI economy responsibly. Consequently, there are many different participants and stakeholders in this emerging field of ML Ops. This meetup will review the current state of what it takes to build successful pipelines in this probabilistic setting, so that we can build a shared understanding of why we treat our models as living products and ask the right questions: Are they healthy? Are they representative? Are they biased?
_______________________________________________
HydPy -- Hyderabad Python Users Group - India mailing list -- hydpy@python.org
To unsubscribe send an email to hydpy-leave@python.org
https://mail.python.org/mailman3/lists/hydpy.python.org/
Member address: balchandani.sanchit@gmail.com