As work continues towards defining and implementing effective policy on the use of AI methods, increasing attention is also being paid to Explainable AI (XAI) and connected concepts such as Interpretable Machine Learning. Recent efforts to disentangle these concepts into specific modalities distinguish between ‘post-hoc’ or ‘output’ explainability and ‘model’ explainability. The former, which entails ‘explaining’ an output from the AI system by connecting it to certain features in the input, has received much attention, as it yields methods that can, in principle, be applied to inherently black-box models such as various forms of deep neural networks, to systems deliberately kept from public review through e.g. trade secrets or public secrecy acts, and to models that are more transparent. However, without model introspection it is often hard to reach general results that go beyond statistical estimates, which is not always satisfactory. In contrast, the latter form of explainability (also called ‘transparency’) can, under certain circumstances, be used to prove definite properties of the system, such as the relevance or irrelevance of a particular input feature, or compliance with specific limits on bias.
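As a purely illustrative sketch of this distinction (the toy task, the choice of models, and the use of scikit-learn below are assumptions made for the example, not part of the workshop scope), a post-hoc method can only estimate the relevance of an input feature to a black-box model, whereas inspecting a transparent model directly can establish relevance or irrelevance outright:

# Illustrative only: post-hoc ('output') explainability vs. model explainability.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))           # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]       # by construction, feature 2 is irrelevant

# Post-hoc / output explainability: probe a black-box model from the outside,
# e.g. with permutation importance. The outcome is a statistical estimate of
# each feature's relevance, not a guarantee.
black_box = RandomForestRegressor(random_state=0).fit(X, y)
estimate = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print("estimated importances:", estimate.importances_mean)

# Model explainability / transparency: inspect the model itself. For a linear
# model, a (numerically) zero coefficient proves that the corresponding
# feature cannot influence any prediction.
transparent = LinearRegression().fit(X, y)
print("coefficients:", transparent.coef_)  # third coefficient is ~0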


Another issue that has received increasing scrutiny in recent years is the resource use and accessibility of AI models. The most successful and well-known current black-box model implementations are all extremely large-scale and, in the case of language models, almost exclusively focused on languages with a very large presence on the web. In contrast, languages with many speakers and a large but not digitally available literary history receive far less attention from AI research. Likewise, smaller and less well-resourced organisations may have difficulty accessing AI solutions for appropriate tasks, simply because of the relatively large resource requirements of the types of models currently in vogue.

Addressing these and similar issues, some approaches attempt to leverage human knowledge about the problem structure and/or use cleverly designed data sets in order to let machine learning models arrive at reasonably accurate results using fewer resources. Often, these approaches are also more transparent than more data-centered ones, by virtue of having more human involvement in their construction. However, for many of the same reasons that AI research prefers languages where there is plenty of data to experiment on, it also prefers the models that perform well under those specific circumstances. As a result, these alternative approaches are not all well known, even among researchers, and certainly not among industry practitioners.

The HMIEAI workshop will deal with heterodox methods of machine learning that offer advantages over black-box methods in terms of both transparency and resource efficiency, including a reduced need for training data. Within this broad remit, methods that are empirically proven on real-world problems are preferred, though in particular cases promising benchmark results or theoretical properties are sufficient for inclusion. An important part of the workshop will be to generate cross-pollination between the various approaches, exploring where different solutions to similar problems can be combined to further improve the final results and insights. The workshop aims to bring together researchers from different specialised communities in a joint dialogue towards a better understanding of the challenges of transparency, interpretability, and resource use.


The particular focus of the workshop will be the way human intentions and knowledge are encoded both in the initial model (through hyperparameters, network design, direct programming of the model, etc.) and in the training data (through selection, encoding, curation, feature engineering, etc.), and how this contributes to a transparent learned model. An additional focus is on the way this transparency translates concretely into (i) human understanding of the AI system, and thus the use of the system itself as an independently interesting research output, and (ii) the opportunity to make absolute claims about the behaviour of the model, e.g. in terms of fairness, bias, and the inclusion or exclusion of certain inputs in its computation.

A final focus will be on various kinds of resource efficiency, in particular in comparison with, e.g., Large Language Models and other models that rely on millions, billions, or even trillions of parameters and/or extensive training on supercomputers. While the models included need not yield bleeding-edge results on benchmarks, they should ideally be demonstrated to reach useful levels of accuracy using fewer resources than a comparable black-box system.