February 6-7, 2023

Jacksonville Beach, FL
National Science Foundation and the University of Florida

Speakers and Presentations

Keynote Speaker

  • Keith Sonderling
    Dr. Keith Sonderling

    “The Promise and Perils of Artificial Intelligence in Employment Decision-Making” (video coming soon)

    Abstract: Commissioner Sonderling will address the implications of Artificial Intelligence and Machine Learning in employment decision-making and human resources. Employers are using AI to make employment decisions at every stage of the job life cycle, from hiring to promotion to firing. AI-driven technologies have the potential to make the workplace more open, fair, and inclusive by eliminating unlawful discrimination from employment decisions. However, AI can also amplify workplace bias if it is poorly designed or carelessly deployed. Drawing on real-world examples, this presentation will provide an overview of the promise and perils of using AI to make employment decisions. Commissioner Sonderling will provide an overview of the legal framework governing AI in the United States and will address the ways that U.S. civil rights laws protect employees from discrimination by algorithms. Finally, he will suggest ways that employers can reap the benefits of AI while respecting the rights of workers. This subject is especially timely as COVID has accelerated the rate at which employers are adopting technology to do work once performed by human resources professionals.


  • Ahmed Abbasi
    Dr. Ahmed Abbasi

    “Should Fairness be a Metric or a Model? A Case for Using Model-based Frameworks to Assess Bias in Machine Learning” (video coming soon)

    Presentation Abstract: Fairness measurement is crucial for assessing algorithmic bias in various types of machine learning (ML) models, including ones used for search relevance, recommendation, personalization, talent analytics, and natural language processing. However, the fairness measurement paradigm is currently dominated by fairness metrics that examine disparities in allocation and/or prediction error as univariate key performance indicators (KPIs) for a protected attribute or group. Although existing metrics are important and effective for assessing ML bias in contexts such as recidivism and AI-based recruitment, I will argue that they do not work well in many real-world applications of ML characterized by imperfect models applied to an array of instances encompassing a multivariate mixture of protected attributes. Using as a case study a specific example of pre-trained language models guiding a user health intervention, I will illustrate how basic regression models can parsimoniously measure bias across multiple protected class attributes (e.g., different demographics), consider interaction effects between these attributes (i.e., intersectional bias), and better predict how the bias in ML models will play out in downstream decisions. A minimal sketch of this regression framing appears below.
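
    To make this concrete, here is a minimal sketch in Python, assuming synthetic data and hypothetical column names (it is not the speaker’s code): per-instance prediction error is regressed on protected attributes and their interaction, so main effects and intersectional effects are estimated jointly rather than as separate univariate KPIs.

```python
# Minimal sketch (synthetic data, hypothetical columns): fairness as a
# model rather than a single metric. Per-instance error is regressed on
# protected attributes and their interaction, so main effects and the
# intersectional (product) term are estimated jointly.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "gender": rng.choice(["f", "m"], n),
    "age_group": rng.choice(["old", "young"], n),
})
# Simulate an ML model's per-instance squared error with a main effect
# for gender plus an extra penalty at the intersection (older women).
df["sq_error"] = (rng.normal(1.0, 0.2, n)
                  + 0.30 * (df["gender"] == "f")
                  + 0.25 * ((df["gender"] == "f")
                            & (df["age_group"] == "old")))

# "gender * age_group" expands to both main effects plus their product;
# the product term is the intersectional-bias estimate.
fit = smf.ols("sq_error ~ gender * age_group", data=df).fit()
print(fit.summary().tables[1])
```

    Reading the fitted coefficients then answers, in one model, questions that a stack of univariate parity metrics could only answer piecemeal.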

    Bio: Dr. Ahmed Abbasi is the Giovanini Endowed Chair Professor in the Department of IT, Analytics, and Operations in the Mendoza College of Business at the University of Notre Dame. He serves as Co-Director of the Human-centered Analytics Lab (HAL) and directs the PhD program in Analytics. Prior to joining Notre Dame, he was an Endowed Chair, Associate Dean, and Director of the Center for Business Analytics at the University of Virginia. Ahmed received his Ph.D. from the Artificial Intelligence (AI) Lab at the University of Arizona.

    He has over twenty years of experience in machine learning and predictive analytics, with applications in health, online communities, and digital user experience. Ahmed’s research has been funded by over a dozen grants from the National Science Foundation and industry partners such as AWS, Microsoft, eBay, and Oracle. He has also received the IEEE Technical Achievement Award, the INFORMS Design Science Award, and an IBM Faculty Award for his work on human-centered AI.

    Ahmed has published over 100 articles in top journals and conferences, won best paper awards from AIS, MISQ, ISR, and WITS, and was a finalist for the AMA’s Hunt/Maynard Award. His work has been featured in various media outlets including the Wall Street Journal, Harvard Business Review, the Associated Press, WIRED, CBS, and Fox. Ahmed serves as a senior or associate editor for various INFORMS, IEEE, and ACM journals and is a past Chair of the INFORMS College on AI. He has also served as a co-founder or advisory board member for multiple predictive analytics companies.

  • Rumman Chowdhury
    Dr. Rumman Chowdhury

    “The Paradox of Transparency” (video coming soon)

    Presentation Abstract: Increasingly, regulation of AI governance mandates transparency and accountability, including algorithmic audits and third-party assessments. However, there has been little conversation about how algorithmic accountability can be at odds with increased privacy and security requirements. In this talk, I will discuss my experiences introducing algorithmic auditing at scale, both as a third-party consultant and founder and at Twitter, and the challenges we face. I’ll also discuss emerging technologies that may help address this issue.

    Bio: Dr. Rumman Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She is a pioneer in the field of applied algorithmic ethics, creating cutting-edge socio-technical solutions for ethical, explainable, and transparent AI. Dr. Chowdhury currently runs Parity Consulting and the Parity Responsible Innovation Fund, and is a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University.

    Previously, Dr. Chowdhury was Director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, leading a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform. Prior to Twitter, she was CEO and founder of Parity, an enterprise algorithmic audit platform company. She formerly served as Global Lead for Responsible AI at Accenture Applied Intelligence.

  • Diane Coyle
    Dr. Diane Coyle

    “When should algorithms take decisions that matter?”

    Presentation Abstract: It is widely thought that machine learning systems hold great promise for doing a better job than humans in some areas of decision-making yet there are also some obvious pitfalls, including biased data and lack of alignment. This debate involves some generally unstated assumptions. It presumes that what ‘better’ means is agreed and codifiable; and it assumes that all relevant information is captured, perhaps imperfectly, as ‘data’. I interrogate these assumptions and consider their implications for AI governance.

    Bio: Diane Coyle is the Bennett Professor of Public Policy at the University of Cambridge. She co-directs the Bennett Institute, where she heads research under the themes of progress and productivity. Diane’s latest book is Cogs and Monsters: What Economics Is and What It Should Be. Her research focuses on the digital economy and digital policy. Diane is also a Director of the Productivity Institute, a Fellow of the Office for National Statistics, and an adviser to the Competition and Markets Authority. She has served in a number of public service roles, including as Vice Chair of the BBC Trust and as a member of the Competition Commission and the Natural Capital Committee. Diane was previously Professor of Economics at the University of Manchester and was awarded a CBE for her contribution to the public understanding of economics in the 2018 New Year Honours.

    ‘Explaining’ machine learning reveals policy challenges (with Adrian Weller), Science, 26 June 2020, Vol. 368, Issue 6498, pp. 1433-1434.

    Socializing Data, Daedalus, 151(2), Spring 2022, pp. 348-359.

    The Value of Data: Policy Implications (with Stephanie Diepeveen, Lawrence Kay, Jeni Tennison, and Julia Wdowin), Bennett Institute, February 2020.

  • Chris Harle
    Dr. Chris Harle

    International Perspectives: Panel Discussion (video coming soon)

    Bio: Dr. Chris Harle is a Professor and Interim Chair in the Department of Health Policy and Management at the Indiana University (IU) Richard M. Fairbanks School of Public Health. He is also a scientist in the Regenstrief Institute’s Center for Biomedical Informatics and an associate faculty member in the IU Kelley School of Business. From 2020 to 2022, Dr. Harle was the Chief Research Information Officer for University of Florida (UF) Health. He also played a lead role in the recruitment of over 25 new faculty members as part of UF Health’s Artificial Intelligence (AI) initiative. Dr. Harle’s work focuses on bringing together diverse teams from different academic disciplines, healthcare organizations, and information technology services to design, implement, and evaluate the impact of health information systems. He is passionate about developing individuals and teams that continually learn while making a difference in healthcare and public health. Dr. Harle holds an MS in Decision and Information Sciences from the University of Florida’s Warrington College of Business Administration and a PhD in information systems and management from Carnegie Mellon University’s H. John Heinz III College. Subsequently, he completed a National Institutes of Health-funded career development award in Clinical and Translational Science.

  • Jens Kleesiek
    Dr. Jens Kleesiek

    International Perspectives: Panel Discussion (video coming soon)

    Bio: Dr. Jens Kleesiek is a full professor at the Institute for Artificial Intelligence in Medicine (IKIM) of the University Medical Center Essen, Germany, where he heads the department of Medical Machine Learning. He is also Associate Director for Data and IT at the West German Cancer Center.

    Jens studied medicine in Heidelberg and bioinformatics in Hamburg, Germany. He received his MD in 2005 and his Ph.D. in computer science in 2012. In 2020 he obtained his board certification in radiology, and in 2021 his specialization in medical informatics.

    His research focuses on methods for self- and weakly-supervised learning to detect clinically relevant patterns in medical data and on the integration of multimodal information for improving decision-making at the point of care. To facilitate machine learning in clinical settings, he and his team utilize modern cloud technologies and follow infrastructure-as-code paradigms.

  • Ramayya Krishnan
    Dr. Ramayya Krishnan

    “Codification of tradeoffs by Organizations in AI Governance”

    Presentation Abstract: AI is increasingly being deployed to support consequential decisions. While risk management frameworks such as the AI RMF have been released by NIST, they focus on process and principles and do not specify how to operationalize responsible AI principles in practice. Drawing on examples from health care and public sector settings, I will motivate a framework for thinking about the characteristics of AI models and interventions (e.g., their accuracy, explainability, and bias) and the impact they have on the outcomes of the organizational system. In health care, the quintuple aim is used to express and measure system outcomes. Given system outcome measures and metrics for the characteristics of AI models, I will provide an initial set of ideas for assessing, codifying, and exploring trade-offs between model characteristics (e.g., accuracy vs. bias) and their impact on system outcomes; a toy illustration of such a codified trade-off follows below.
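
    As a toy illustration of such a codified trade-off (an assumption-laden stand-in, not the framework from the talk), the sketch below scores two candidate models on accuracy and a demographic-parity gap and combines them with organization-chosen weights; the weights, metrics, and synthetic data are all assumptions.

```python
# Toy sketch (not the talk's framework): codify an accuracy-vs-bias
# trade-off as one organizational objective. Weights, metrics, and the
# synthetic data are illustrative assumptions.
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates across groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def outcome_score(y_true, y_pred, group, w_acc=0.7, w_fair=0.3):
    # Higher is better: reward accuracy, penalize the parity gap.
    return w_acc * accuracy(y_true, y_pred) - w_fair * parity_gap(y_pred, group)

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["a", "b"], n)
y_true = rng.integers(0, 2, n)
# Model A: accurate overall but over-predicts positives for group "a".
boost = (group == "a") & (rng.random(n) < 0.4)
pred_a = np.where(boost, 1, y_true)
# Model B: slightly noisier but group-blind.
flip = rng.random(n) < 0.15
pred_b = np.where(flip, 1 - y_true, y_true)

for name, pred in [("A", pred_a), ("B", pred_b)]:
    print(name,
          f"acc={accuracy(y_true, pred):.3f}",
          f"gap={parity_gap(pred, group):.3f}",
          f"score={outcome_score(y_true, pred, group):.3f}")
```

    Under these (arbitrary) weights, the slightly less accurate but group-blind model scores higher; making such weights explicit is the kind of codification the talk motivates.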

    Bio: Ramayya Krishnan is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at the H. John Heinz III College and the Department of Engineering and Public Policy at Carnegie Mellon University. A faculty member at CMU since 1988, Krishnan was appointed Dean of the Heinz College in 2009. Krishnan was educated at the Indian Institute of Technology and the University of Texas at Austin. He has a Bachelor’s degree in mechanical engineering, a Master’s degree in industrial engineering and operations research, and a PhD in Management Science and Information Systems. Krishnan’s research interests focus on consumer and social behavior in digitally instrumented environments. His work has addressed technical, policy, and business problems that arise in these contexts, and he has published extensively on these topics. He has founded multiple research centers at CMU and is the faculty director of the Block Center for Technology and Society. He advises governments and policy-making organizations on technology policy and the deployment of data-driven policy making. He is an advisor to the President of the Asian Development Bank and a member of the GeoTech Commission of the Atlantic Council. He is a Fellow of the American Association for the Advancement of Science (AAAS, Section T), an INFORMS Fellow, an elected member of the National Academy of Public Administration, and a distinguished alumnus of both the Indian Institute of Technology and the University of Texas at Austin. He served in 2019 as the 25th President of INFORMS, the global operations research and analytics society. He was appointed to the National AI Advisory Committee to the President and the AI Initiatives Office in 2022.

  • Keith Leavitt
    Dr. Keith Leavitt

    “Ability, Benevolence, Integrity…Reflexivity? A Model of Employee Trust in Manager-In-The-Loop Performance Management Systems.” (video coming soon)

    Presentation Abstract: Organizations increasingly rely on machine learning algorithms to support their performance management practices, allowing them to more accurately monitor, evaluate, rank, and reward employee performance in real-time. However, such technologies threaten to erode employee trust. While much is known about when people are likely to trust algorithms versus humans, trust research to date overlooks the fact that algorithms often augment (rather than replace) human decision-makers; this suggests that the appropriate focus of trust may be at the configuration (i.e., manager plus algorithm) level. Drawing from conceptualizations of shared agency between automating technologies and human users, we present a model of employee trust in manager-in-the-loop (MIL) performance management systems, within which managers serve dual roles of translator and augmenter of often inscrutable algorithmic technologies. We theorize that creating subordinate trust in MIL systems requires supervising managers to both work with and work around automating technologies to bolster their own perceived trustworthiness, with the goal of building subordinate trust by creating system reflexivity. Our process model offers scholars a framework for examining trust within 21st century organizations, and offers practicing managers a set of considerations for navigating algorithmic augmentation for effective performance management while maintaining trust among their employees.

    Bio: Dr. Keith Leavitt’s research interests include behavioral ethics, identity and situated judgment, and research methods/epistemology. Specifically, much of his research focuses on how social expectations and constraints inform or inhibit ethical behavior in the workplace. His research has been published in the Academy of Management Journal, the Academy of Management Review, the Journal of Applied Psychology, Personnel Psychology, Organizational Behavior and Human Decision Processes (OBHDP), the Journal of Management, the Journal of Personality and Social Psychology, Organizational Research Methods, the Journal of Organizational Behavior, the Journal of Vocational Behavior, and the Journal of Business Ethics. He currently serves as an Associate Editor at OBHDP and serves on the editorial boards of both AMJ and JAP.

    Keith’s work has been featured in over 200 news and media outlets including the New York Times, Forbes, NBC News, Fast Company, Inc. Magazine, Vice News, Wall Street Journal Radio, The Huffington Post, Time Magazine, and prominently on the front of his mother’s refrigerator.

    In his spare time, he enjoys mountain biking, fly fishing, skiing, the occasional existential crisis, and trying to ignore rapidly-accumulating indicators of middle-age.

  • Hila Lifshitz-Assaf
    Dr. Hila Lifshitz-Assaf

    “To engage or not to engage with AI for critical decisions? That is the question!” (video coming soon)

    Presentation Abstract: Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Yet gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major US hospital where AI tools were used in three departments by diagnostic radiologists making breast cancer, lung cancer, and bone age determinations. The study illustrates the hindering effects of opacity that professionals experienced when using AI tools and explores how these professionals grappled with it in practice. In all three departments, this opacity resulted in professionals experiencing increased uncertainty because AI tool results often diverged from their initial judgment without providing underlying reasoning. Only in one of the three departments did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals invested in AI interrogation practices – practices enacted by human experts to relate their own knowledge claims to AI knowledge claims. Professionals in the other two departments did not enact such practices and did not incorporate AI inputs into their final decisions, which we call un-engaged augmentation. Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to the literature on AI adoption in knowledge work.

    Bio: Hila Lifshitz-Assaf is a Professor of Management at Warwick Business School and a visiting faculty member at the Laboratory for Innovation Science at Harvard University.

    Professor Lifshitz-Assaf’s research focuses on developing an in-depth empirical and theoretical understanding of the micro-foundations of scientific and technological innovation and knowledge creation processes in the digital age. She explores how the ability to innovate is being transformed, as well as the challenges and opportunities this transformation creates for R&D organizations, professionals, and their work. She conducted an in-depth, three-year longitudinal field study of NASA’s experimentation with open innovation online platforms and communities, which resulted in a scientific breakthrough. This study received the Grigor McClelland Best Dissertation Award at the European Group for Organizational Studies (EGOS) in 2015, the Administrative Science Quarterly (ASQ) award for the best paper based on a dissertation (2018), and the best published paper award from the Organizational Communication and Information Systems division of the Academy of Management (2018).

    She investigates new forms of organizing for the production of scientific and technological innovation, such as crowdsourcing, open source, open online innovation communities, Wikipedia, hackathons, and makeathons. Her work received the prestigious INSPIRE grant from the National Science Foundation and has been presented and taught at a variety of institutions including MIT, Harvard, Stanford, INSEAD, Wharton, London Business School, Bocconi, IESE, UCL, UT Austin, Columbia, and Carnegie Mellon. Her work has been recognized for its strong impact on industry; she received the Industry Studies Association Frank Giarrantani Rising Star Award and the Industry Research Institute grant for research on R&D.

    Prior to academia, Professor Lifshitz-Assaf worked as a strategy consultant for seven years, specializing in growth and innovation strategy in telecommunications, consumer goods and finance.

    Professor Lifshitz-Assaf earned a doctorate from Harvard Business School, an MBA (magna cum laude) from Tel Aviv University, and a BA in Management and an LLB in Law, both magna cum laude, from Tel Aviv University, Israel.

  • Genevieve Melton-Meaux
    Dr. Genevieve Melton-Meaux

    “Clinical AI Governance in Patient Care” (video coming soon)

    Presentation Abstract: Artificial intelligence (AI) holds significant promise for transforming patient care. Today, AI is increasingly being adopted and utilized clinically – particularly in the form of clinical decision support. Because of the critical nature of healthcare delivery decisions and the potential patient safety implications of clinical AI, clinical AI governance is essential. However, regulatory guidance, risk frameworks, governance models, and local exemplars are only emerging, particularly when considering the full lifecycle of clinical AI and the dynamic and complex nature of healthcare delivery. This talk will provide an overview of the current state, boots-on-the-ground considerations, and future perspectives on clinical AI governance.

    Bio: Dr. Melton-Meaux trained in computer science and mathematics at Washington University before completing medical school and surgical residency at Johns Hopkins University, a colon and rectal surgery fellowship at the Cleveland Clinic, a National Library of Medicine-sponsored postdoctoral fellowship in biomedical informatics at Columbia University, and a PhD in health informatics at the University of Minnesota. She is currently Professor of Surgery and Health Informatics, Director of the Center for Learning Health System Sciences, Associate Director of the Clinical NLP-IE (Natural Language Processing-Information Extraction) Research Program, and Program Director of the Clinical Informatics Fellowship at the University of Minnesota. She serves as the Chief Analytics and Care Innovation Officer for M Health Fairview, leading enterprise data technology, analytics, informatics, and its Clinical Variation evidence-based care program. Her research interests are clinical NLP, surgical informatics, learning health systems, enhancing note usage in electronic health records, and improving patient care with health IT, digital capabilities including AI/ML, and patient-centric solutions. She is currently Chair-elect of the Board of Directors of the American Medical Informatics Association, and Immediate Past-President and an elected fellow of the American College of Medical Informatics.

  • Lucila Ohno-Machado
    Dr. Lucila Ohno-Machado

    “Data Sharing and Privacy Protection for Healthcare Data” (video coming soon)

    Presentation Abstract: Sharing data obtained in the process of caring for patients is important for learning healthcare systems and for advancement of biomedical and clinical research. With the increasing use of AI models and their need for large amounts of data, there is increasing debate about the pros and cons of data sharing. I will discuss data sharing needs in biomedical research, risks to privacy, and why obtaining patient consent is an important topic for discussion when sharing electronic health record data is being considered. I will briefly introduce data obfuscation methods and the increased risk when data are linked from different databases.
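
    As background for the obfuscation methods mentioned above, here is a minimal, generic sketch of one textbook approach (not the speaker’s method; the column names, bins, and k are assumptions): quasi-identifiers are generalized until every record shares its combination with at least k others, since it is those combinations, not single columns, that enable linkage across databases.

```python
# Generic sketch of one textbook obfuscation step: generalize
# quasi-identifiers (age, ZIP) and enforce k-anonymity, i.e. every
# quasi-identifier combination is shared by at least k records.
# Column names, bins, and k are illustrative assumptions.
import pandas as pd

def k_anonymize(df, k):
    out = df.copy()
    out["age"] = (out["age"] // 10 * 10).astype(str) + "s"  # 37 -> "30s"
    out["zip"] = out["zip"].str[:3] + "**"                  # coarsen ZIP
    sizes = out.groupby(["age", "zip"])["age"].transform("size")
    # Suppress records whose generalized combination is still too rare.
    return out[sizes >= k]

records = pd.DataFrame({
    "age": [34, 37, 41, 44, 46, 47, 52, 58],
    "zip": ["32601", "32603", "32605", "32607",
            "32608", "32609", "32611", "32612"],
    "dx":  ["flu", "copd", "flu", "t2dm", "flu", "t2dm", "copd", "flu"],
})
print(k_anonymize(records, k=2))
```

    The linkage risk the abstract highlights is the flip side: if an outside database shares even these coarsened quasi-identifiers, joining on them can re-identify records, so k has to be judged against what external data could plausibly be linked.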

    Bio: Lucila Ohno-Machado, MD, PhD, MBA, Waldemar von Zedwitz Professor of Medicine and Biomedical Informatics and Data Science, is deputy dean for biomedical informatics and chairs the newly created free-standing Section for Biomedical Informatics and Data Science at Yale School of Medicine. As deputy dean for biomedical informatics, Ohno-Machado oversees the infrastructure related to biomedical informatics research across the Yale academic health system. The new Section addresses inequality in health care and research with innovative approaches at the intersection of engineering, technology, and medicine. Its faculty work with scientists exploring fundamental biological principles and physician-scientists implementing interventions that promote health for all. It also leads new studies and data collection initiatives; builds new algorithms and tools; and is the nexus of artificial intelligence (AI) in medicine at Yale. Ohno-Machado is an elected member of the National Academy of Medicine, the American Society for Clinical Investigation, the American Institute for Medical and Biological Engineering, the American College of Medical Informatics, and the International Academy of Health Sciences Informatics.

  • Praditporn Pongtriang
    Dr. Praditporn Pongtriang

    International Perspectives: Panel Discussion (video coming soon)

    Bio: Assistant Professor Praditporn Pongtriang (RN, MSN, PhD) is currently Associate Dean for Research, Innovative Development, and Academic Service at the Faculty of Nursing, Suratthani Rajabhat University, Thailand. Dr. Pongtriang’s research interests include HIV prevention, safe sex behaviors, health promotion, emergency care, and AI for improving health outcomes among populations that face barriers to accessing health care throughout Thailand, such as older adults with non-communicable diseases (NCDs) and patients needing emergency care in rural and remote areas. Dr. Pongtriang is currently running research projects focused on integrating multidisciplinary, technology-based health data systems to improve the healthcare system and health outcomes in southern Thailand.

  • Ann Marie Ryan
    Dr. Ann Marie Ryan

    “Transparency and Job Candidates: Should Standards for AI Differ from those for Human Decision Makers?”

    Presentation Abstract: High stakes decisions can and should be held to high standards. Interestingly, the evolving discussion of standards for the use of AI in hiring contexts focuses on holding AI to the same standards as traditional selection systems in many respects (e.g., validity, reliability) but seems to suggest more stringent or different demands for transparency to key stakeholders, particularly job applicants. This presentation will review the nascent literature on explainability, transparency, and related concepts for AI use in employment contexts, as well as on job candidate reactions to AI-based assessments, and then draw comparisons and contrasts with the established literature and practice on applicant reactions to selection procedures and guidance on information provided to job candidates.

    Bio: Ann Marie Ryan is a professor of organizational psychology at Michigan State University. Her major research interests involve improving the quality and fairness of employee selection methods, and topics related to diversity and justice in the workplace. In addition to publishing extensively in these areas (she has published over 200 peer reviewed articles and book chapters), she regularly consults with organizations on improving assessment processes.

    Dr. Ryan is a past president of the Society for Industrial and Organizational Psychology, past editor of the journal Personnel Psychology, and former associate editor of American Psychologist; she currently serves on the editorial boards of several journals. In 2011 she received the Distinguished University Professor Award from MSU. In 2013 she received SIOP’s Distinguished Teaching Contributions Award and the Academy of Management’s Sage Award for Outstanding Scholarly Contributions to the Study of Diversity. She is a fellow of SIOP, the American Psychological Society, and the American Psychological Association (Divisions 5 & 14). She was awarded the APAGS 2018 Raymond D. Fowler Award for outstanding contributions to student professional development. She also has served on numerous technical advisory boards and committees for federal government agencies, consulting firms, and private industry. She was recipient of the 2021 Michael R. Losey Excellence in Human Resource Research Award and the 2022 MSU Outstanding Faculty Mentor Award.

    Citation: Considerations and Recommendations for the Validation and Use of AI-Based Assessments for Employee Selection, Society for Industrial and Organizational Psychology (SIOP), January 2023

  • Robert Seamans
    Dr. Robert Seamans

    “The Role of Ethical Principles in AI Startups”

    Presentation Abstract: Given the lack of US government regulation of AI development, large incumbent technology firms have codified data usage and AI development guidance, which have quickly become norms in the industry. AI startups have started to adopt these norms to help them manage ethical issues associated with data collection, storage, and usage. This paper explores startups’ ethics-related actions, including ethical AI policy adoption. In alignment with signaling theory, we find that merely adopting an ethical AI policy (i.e., a less costly signal) does not relate to increased performance. However, we do find evidence that investors reward startups that take more costly preventative pro-ethics actions, like seeking expert guidance, training employees about unconscious bias, and hiring certain types of programmers.

    Bio: Robert Seamans (Ph.D., UC Berkeley) is an Associate Professor at New York University’s Stern School of Business and Director of its Center for the Future of Management. Professor Seamans’ research focuses on how firms use technology in their strategic interactions with each other, and also focuses on the economic consequences of AI, robotics and other advanced technologies. His research has been published in leading academic journals and been cited in numerous outlets including The Atlantic, Forbes, Harvard Business Review, The New York Times, The Wall Street Journal and others. In 2015, Professor Seamans was appointed as the Senior Economist for technology, innovation and competition policy on President Obama’s Council of Economic Advisers.

    A current draft of the paper should be ready in a couple of weeks and will be posted to SSRN. An earlier version, Ethical AI development: Evidence from AI startups, appeared a year ago at Brookings.

  • Kaleb Smith
    Dr. Kaleb Smith

    “The NVIDIA AI Technology Center at UF and its Impact to the UF Community.” (video coming soon)

    Presentation Abstract: In this talk I will introduce the NVIDIA AI Technology Center (NVAITC) and provide some background on its structure compared to other NVIDIA organizations. I will discuss the scope of the NVAITC and how it brings together high-impact research from the university and the corporation. We will then explore some of the projects the NVAITC has been a part of since its creation in 2021. In each of these, I will explain the impact from both the not-for-profit and the for-profit side and how goal setting was crucial to the success. I will finish by addressing the way forward and how the NVAITC at UF has become a highlight in higher education research across the globe.

    Bio: Kaleb has a PhD in machine learning from the Florida Institute of Technology, focusing on deep learning generative methods for time series data, and an MS in computer vision. He currently works at NVIDIA as a senior data scientist with higher education and research labs across the country. Specifically, he runs the NVIDIA AI Technology Center at the University of Florida, where he has trained large language models (BERT 9B models, GPT-3 20B models) and worked on AI applications in remote sensing, clinical/research neuroscience, architectural design, radiology, CFD, agriculture, and smart city planning. He worked full time while doing his PhD, running an AI prototype lab for a DOD/IC contractor, where he grew the department from just himself with no funding to twelve scientists/developers with $5 million in contracts in two years. Before that, Kaleb was an ML/AI subject matter expert at MITRE, a federally funded research and development organization, in its emerging technologies sector.

  • Emma Spencer
    Dr. Emma Spencer

    “Understanding the challenges and opportunities for Artificial Intelligence (AI) uses in public health, perspectives on developing data governance practices.” (video coming soon)

    Presentation Abstract: The Centers for Disease Control and Prevention has invested heavily in the data modernization (DM) efforts of public health institutions at both the federal and state levels, providing funding and technical support for DM endeavors. The overall goal is to bolster and unite public health data by building a strong foundation for surveillance, data collection, and analysis; accelerating the use of data for action, including predictive analytics; strengthening the skills of the public health workforce; improving partnerships; and working on change management and governance activities.

    Advances in cloud computing, “big data” science, and informatics, including artificial intelligence (AI) and machine learning (ML), can be applied to core public health functions to protect and promote health through predictive analytics of health outcomes within a population and the use of high-volume data for real-time response activities and delivery of health interventions. However, even with the opportunities that AI offers for more precise public health, there are many considerations to factor in when using AI and other DM efforts to ensure that there are no misuses of data that may generate harm for individuals or populations. Data governance, ethics, privacy, security, data sharing, data standardization, access, and use are key considerations for minimizing risk and maximizing the effectiveness of these data advances.
    Here I will discuss the initial next steps for the potential implementation of AI and other DM initiatives at the Florida Department of Health, and identify and discuss opportunities, challenges, and governance considerations related to their use in the public health arena.

    Bio: Dr. Emma Spencer, PhD, MPH, is the current Division Director for Public Health Statistics and Performance Management at the Florida Department of Health. During her past seven years at the Florida Department of Health, the focus of Dr. Spencer’s work and research has been to provide the public, academia, and other stakeholders with the latest public health and vital statistics data while maintaining the highest level of security and confidentiality and modernizing how those data are shared, governed, and analyzed. Dr. Spencer is committed to working with various stakeholders to provide high-quality information from connected and adaptable data systems that drive public health decision making, while working to improve the public health workforce and increase data science and informatics skills.

  • Gregor Stiglic
    Dr. Gregor Stiglic

    International Perspectives: Panel Discussion 1

    Bio: Dr. Gregor Stiglic is a professor and head of the Research Institute at the University of Maribor Faculty of Health Sciences. Since 2014, he has served as Vice Dean for Research at the same faculty, where he coordinates the work of various research groups. Since 2003, he has led and worked on numerous research projects in digital healthcare and the application of advanced artificial intelligence methods in healthcare.

    As a visiting researcher, he conducted research at the Center for Data Analysis and Biomedical Analytics (Temple University, 2012), the Stanford Center for Biomedical Research (Stanford University, 2013), and the Usher Institute (University of Edinburgh, 2019).

    In June 2021, he was appointed to the board of the Artificial Intelligence and Medicine in Europe (AIME) association and, in October 2021, to the board of the Slovenian Society for Medical Informatics (SDMI).

    His research collaboration with University of Edinburgh researchers was recognized in March 2021 when he became an Honorary Fellow at the Deanery of Molecular, Genetic and Population Health Sciences in the College of Medicine and Veterinary Medicine, University of Edinburgh.

    Over the last ten years, he has worked as a research project proposal reviewer and panel member for various funding agencies, including EU Horizon Europe (2021), the UKRI Medical Research Council (2020), the Netherlands Organisation for Scientific Research – NWO (2018), and the Research Council of Norway (2015).

    Gregor serves as an associate editor at the journals Artificial Intelligence in Medicine (Elsevier), BMC Medical Informatics and Decision Making (Springer Nature), and Big Data (Mary Ann Liebert), as well as an editorial board member at PLOS ONE (Public Library of Science), Journal of Healthcare Informatics Research (Springer Nature), and BMC Digital Health (Springer Nature).

  • Nancy Tippins
    Dr. Nancy Tippins

    “Artificial Intelligence and Employee Selection” (video coming soon)

    Presentation Abstract: Employee selection in the U.S. is guided by legal standards as well as case law. In addition, industrial and organizational psychologists have summarized standards for employment testing in the Principles for the Validation and Use of Personnel Selection Procedures, which is based on current research and best practices. These guidelines and standards promote fairness and accuracy in employee selection. The burgeoning use of artificial intelligence-based assessments to make employment decisions raises many questions about the application of these standards. This talk will focus on the ways in which artificial intelligence-based assessments present challenges to meeting these requirements, as well as the areas in which more research is needed to resolve open issues.

    Bio: Nancy Tippins is a Principal of the Nancy T. Tippins Group, LLC, to which she brings more than 40 years of experience. Her firm creates strategies related to work force planning, sourcing and recruiting, job analysis, employee selection, succession planning, executive assessment, and employee and leadership development. Much of her current work in the area of tests and assessments focuses on evaluating programs for legal risks, including concerns regarding validity, adverse impact, record keeping, consistency in administration, and uses of test information.

    Throughout her career, Nancy has participated in the creation and revision of professional standards for tests and assessments, serving on the committees that revised the Principles for the Validation and Use of Personnel Selection Procedures for the Society for Industrial and Organizational Psychology (SIOP) in 1999 and co-chairing the committee that revised the Principles in 2018. She was also a member of the committee that revised the Standards for Educational and Psychological Tests for the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) in 2014 and was a U.S. representative on the committee that revised the ISO 10667 International Assessment standards in 2011. She is currently involved in a SIOP effort to create an addendum to the Principles that addresses tests and assessments that incorporate artificial intelligence.

    Active in professional affairs, Nancy has a long-standing involvement with the Society for Industrial and Organizational Psychology where she served as President (2000-2001). She is a fellow of SIOP (Division 14 of the APA), Quantitative and Qualitative Methods (Division 5 of the APA), the American Psychological Association (APA), and the American Psychological Society (APS) and is an active participant in several private industry research groups. Nancy received her M.S. and Ph.D. in Industrial and Organizational Psychology from the Georgia Institute of Technology.

  • Pierangelo Veltri
    Dr. Pierangelo Veltri

    International Perspectives: Panel Discussion 4

    Bio: Dr. Pierangelo Veltri has been a full professor in Electronics and Computer Science Bioengineering at the University of Calabria since January 1, 2023. He was a professor at the University Magna Graecia of Catanzaro Medical School from 2002 to 2022, where he worked on the application of computer science and bioengineering models to biomedical and omics data. He was President of the computer science and biomedical engineering degree program at the University of Catanzaro from 2021. He worked as an adjunct professor at the University of Paris XIII, Villetaneuse, from 2000 to 2002. He received his PhD in Computer Science from the University of Paris XI in 2002 and served as a researcher at INRIA in France.

    He is currently vice-president of the Italian scientific society for biomedical informatics (www.sibim.it) and editor of the ACM SIGBIO newsletter. He serves as an associate editor for prestigious journals in biomedical informatics, including the Journal of Healthcare Informatics Research, BMC Medical Informatics and Decision Making, and BMC Medical Imaging.

    He is a coauthor of more than 250 papers, including more than 60 journal papers, and co-edited a book on process management for healthcare. He teaches health informatics, database systems, biomedical instruments, and health process management.

    His main interests are data modeling, protein and molecular modeling, spatial and geographic database systems applied to epidemiology, health informatics and health modeling, and protein structure prediction and contact networks.

  • Heng Xu
    Dr. Heng Xu

    “Shaping the Human-Technology Frontier in Responsible Artificial Intelligence (AI)”

    Presentation Abstract: Responsible Artificial Intelligence (AI) is an emerging area of research that has gained significant attention from multiple research communities, e.g., business, computer science, data science, public health, and the social sciences. Interestingly, many of these research communities have shown significant interest in closely related research problems, albeit from different angles: corporate social responsibility, algorithmic interventions, data governance, regulatory compliance, or digital ethics. Preliminary attempts have been made by researchers in these different fields to define and develop a coherent research agenda for responsible AI development. The problem, however, is that the picture that emerges is fragmented and usually discipline-specific, with methods and theories that are neither fully intermingled nor systematically integrated. In this talk, I will discuss the opportunities and challenges of converging knowledge, methods, and data from multiple fields to address the scientific and societal need for responsible AI. I will also explore some of the complex tradeoffs and interrelations between algorithms, data, human factors, and policy. Finally, I will conclude with a reflection on how we could facilitate progress in this space, grounded in our recent work on AI governance and fairness in machine learning.

    Bio: Dr. Heng Xu is a Professor of Information Technology and Analytics in the Kogod School of Business at American University, Washington, DC, where she also serves as Director of the Cybersecurity Governance Center. Before joining American University in 2018, she was a faculty member at the Pennsylvania State University for 12 years and a program director at the National Science Foundation for 3 years. Her recent research focuses on responsible AI, fairness in machine learning, privacy protection, and data ethics. She has published over 100 research papers and has been awarded over $8 million in competitive research grants from multiple federal funding agencies, including the Defense Advanced Research Projects Agency, the National Institutes of Health, the National Science Foundation, and the National Security Agency, as well as companies such as Amazon and Facebook.

    Dr. Xu’s work has received many awards, including the American University’s Outstanding Scholarship, Research, and Creative Activity Award (2022), Management Information Systems Quarterly’s Impact Award (2021) for her interdisciplinary privacy research, Woman of Achievement Award in IEEE Big Data Security (2021) for her outstanding research contributions and mentoring women in the field, IEEE ITSS Leadership Award in Intelligence and Security Informatics (2020) for her extensive scholarly and community-building efforts, the Operational Research Society’s Stafford Beer Medal (2018) for her work on healthcare privacy, National Science Foundation’s CAREER award (2010) for her work on digital privacy, and many best paper awards and nominations at various conferences. She has also served on a broad spectrum of national leadership committees including co-chairing the Federal Privacy R&D Inter-agency Working Group in 2016, and serving on the National Academies Committee on Open Science in 2017-2018.

    Xu, H., and Zhang, N. (2022). “Implications of Data Anonymization on the Statistical Evidence of Disparity,” Management Science (68:4), 2600-2618.

    This paper raises a new question about the privacy vs. fairness tradeoff: whether data anonymization techniques themselves can mask statistical disparities and thus conceal evidence of disparate impact that is potentially discriminatory. If so, the decision to anonymize data to protect privacy, and the specific technique employed, may pick winners and losers. Examining the implications of these choices on the potentially disparate impact of privacy protection on underprivileged sub-populations is thus a critically important policy question; a small synthetic illustration of the masking effect follows below.
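
    To see how anonymization can mask disparity, consider the small synthetic demonstration below; it is our illustration in the spirit of the paper’s question, not the paper’s analysis, and the outcome rates and flip probabilities are assumptions. Randomizing the protected attribute (a classic randomized-response protection) attenuates the measured gap between groups toward zero.

```python
# Synthetic demo (an illustration in the spirit of the paper, not its
# analysis): randomized-response noise on a protected attribute
# attenuates the measured between-group gap, weakening statistical
# evidence of a real disparity. Rates and flip probabilities are
# assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
group = rng.integers(0, 2, n)                      # protected attribute
# True disparity: 30% positive-outcome rate in group 0 vs 50% in group 1.
y = rng.random(n) < np.where(group == 0, 0.30, 0.50)

def gap(labels):
    return y[labels == 1].mean() - y[labels == 0].mean()

print(f"true gap:        {gap(group):.3f}")        # ~0.20
for p in (0.10, 0.25, 0.40):
    noisy = np.where(rng.random(n) < p, 1 - group, group)
    print(f"flip prob {p:.2f}: {gap(noisy):.3f}")  # shrinks by ~(1 - 2p)
```

    In expectation the measured gap shrinks by a factor of (1 - 2p), so a disparity test run on the anonymized data loses exactly the evidence the paper is concerned about.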

    Xu, H., and Dinev, T. (2022). “Reflections on The 2021 Impact Award: Why Privacy Still Matters,” Management Information Systems Quarterly (46:4), xx-xxxii.

    This article discusses the drastic changes that have taken place in the social, legal, and technological landscape of privacy over the past decade, and how these changes have fundamentally shifted the locus and focus of people’s privacy concerns in the deep learning era. It suggests that the privacy research community would be well served to revisit its research paradigm and the central tenets of its theories to ensure that they are not decoupled from the phenomena that matter in the real world.