Towards Trustworthy and Understandable AI: Unraveling Explainability Strategies on Simplifying Algorithms, Appropriate Information Disclosure, and High-level Collaboration

Human-centered artificial intelligence (AI) has garnered significant attention. Explainability strategies grounded in the concept of explainable AI (XAI) comprise sets of techniques and principles intended to make AI systems understandable and trustworthy for users. However, existing explainability strategies still face numerous challenges in enabling users to better understand AI system decisions. This literature review explores how these challenges can be overcome through simplifying algorithms, appropriate information disclosure, and high-level collaboration, thereby offering future research directions for building AI systems that are trustworthy and understandable to users.


INTRODUCTION
AI systems are becoming increasingly opaque due to the complexity of modern machine learning algorithms, the use of large datasets, and the demands of high-stakes applications. One example is the increasing use of deep learning algorithms based on complex neural networks with many layers. These networks can be trained on massive datasets and have millions or even billions of parameters, making it difficult, if not impossible, to understand how the models make decisions or predictions. As a result, these models can become "black boxes" that are opaque to human understanding [31]. Moreover, as machine learning models are trained on more extensive and diverse datasets, it can become more difficult to trace the decisions made by the models back to the underlying data [14]. Finally, some AI systems are designed to learn and evolve, making them difficult to interpret or predict. As these systems continue to learn and adapt, they may become increasingly opaque, making it difficult to understand how they make decisions or why they behave in specific ways [45]. AI systems are increasingly used in high-stakes applications, such as healthcare and criminal justice, where the decisions made by AI systems can have significant consequences for individuals and society. Because the majority of users of artificial intelligence systems do not have a technical background [48], there is an increasing demand for transparency and accountability of artificial intelligence systems, as well as the ability to interpret decision-making methods [74].
XAI (Explainable AI) is a widely discussed set of goals and techniques to establish AI that is easy for humans to understand [10,33]. Researchers have developed many explainability methods, approaches, and frameworks based on XAI [19,54,59,78] to help users better understand how systems operate and make decisions. However, in most practices, deploying these explainability strategies focuses on engineers and developers rather than end-users [11]. Therefore, it is necessary to study how to establish strategies that enable users to understand AI system decisions better.
This article examines and synthesizes existing literature on explainability strategies using three key dimensions for XAI as a lens. These three dimensions can provide developers and scholars with a comprehensive and broad perspective on how to construct explainability strategies, thereby helping users better understand AI decisions. The first dimension is simplifying algorithms without reducing decision criteria [57]. "If the machine learning algorithm is based on a complicated neural network or a genetic algorithm produced by directed evolution, then it may prove nearly impossible to understand why" [12, p.1]. The amount of information humans can understand and process is limited [55], so transforming complex algorithms into a simpler form may make them easier to understand. The second dimension involves appropriate information disclosure. Considering business secrets, complete disclosure of algorithm code may not be acceptable, but disclosing certain key information, such as summary results and benchmarks, will more effectively communicate algorithm performance to the public [21]. The information asymmetry between AI companies and ordinary users can lead to algorithm opacity and accountability issues [47]; therefore, it is necessary to increase users' understanding of algorithms through appropriate information disclosure. The third dimension concerns high-level collaboration between humans and AI. Participation in interactive machine learning can increase users' understanding of AI algorithms and promote coupling between humans and machines [4]. Explanation can be understood as a social process that emphasizes the importance of dialogue [85], which means that effective communication and collaboration among different stakeholders in the AI environment can increase their understanding of AI. By reviewing the relevant literature, this article aims to answer the following question: How can existing explainability strategies help users better understand artificial intelligence decisions through these three dimensions: simplifying algorithms, appropriate information disclosure, and high-level collaboration?
In answering this question, this article makes the following four contributions. Firstly, it provides a valuable literature review for the study of explainability strategies. Secondly, by reviewing existing explainability strategies, it reveals what contributions existing strategies have made along these three dimensions. Thirdly, it discusses some limitations of implementing existing explainability strategies in real-world environments and emphasizes the importance of validating these strategies. Finally, it provides AI designers and developers with research directions for constructing user-centered explainability strategies in the future.

BACKGROUND

2.1 The challenges of explanation
Although there are many techniques and methods to establish explainable AI, they may still not be able to provide users with an understandable and trustworthy AI system, as they still face challenges such as reliability, comprehensibility, calibration, and BDI (belief, desire, and intention) (Table 1).
Reliability. There are many alternative explanatory methods and techniques, but some of them have defects that may lead to a lack of trust. For example, although LIME (Local Interpretable Model-agnostic Explanations) is widely used in many scenarios, such an explanatory method based on local approximation can only capture the local characteristics of a model but cannot explain its global decision-making behavior [80]. Moreover, LIME is unable to explain the impact of the relationships between the features of specific instances on decision-making. Although the Shapley value is a sound method, especially as applied in the development of SHAP (SHapley Additive exPlanations) [52], a comparative study of LIME and SHAP found that their explanations differed across several defect prediction datasets [71]. Different and even contradictory explanations might lead to potential risks. Visualization methods can also be fragile. A study by Ghorbani et al. [29] shows that adding perturbations to the original data can produce completely different explanations without changing the prediction of the model. To a user, such unreliable explanations may also lead to a loss of trust.
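To make this divergence concrete, the following minimal Python sketch (assuming the third-party lime and shap packages and scikit-learn are installed) asks both explainers to account for the same prediction of the same model; the attributions they return need not agree, which is the kind of inconsistency reported in [71].

```python
# Minimal sketch: two post-hoc explainers attribute the same prediction differently.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer
import shap

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestRegressor(random_state=0).fit(X, y)
x = X[0]  # the single instance we want explained

# Local explanation 1: LIME fits a weighted linear surrogate around x.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
lime_weights = lime_explainer.explain_instance(x, model.predict, num_features=5).as_list()

# Local explanation 2: SHAP distributes the prediction among features via Shapley values.
shap_values = shap.TreeExplainer(model).shap_values(x.reshape(1, -1))[0]
shap_weights = list(zip(feature_names, shap_values))

print("LIME:", lime_weights)
print("SHAP:", shap_weights)
# The two attribution rankings need not coincide, illustrating the reliability concern above.
```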
Comprehensibility. In contrast to transparency, opacity is a cognitive limitation [22] or an epistemic absence [88], which may be caused by features of AI algorithms and the scale required to apply them successfully [14]. Opacity may also arise from the inability to determine the reliability of an AI system or the lack of reason to believe its results [22]. Cognitive limitation and epistemic absence may depend on the epistemic situation (time, status, or process), because: "Here a process is epistemically opaque relative to a cognitive agent X at time t just in case X does not know at t all of the epistemically relevant elements of the process. A process is essentially epistemically opaque to X if and only if it is impossible, given the nature of X, for X to know all of the epistemically relevant elements of the process" [40, p.218].
Epistemically relevant elements can be understood as "a step in the process of transforming inputs to outputs, or as a momentary state transition within the system's overall evolution over time" [88, p.269]. Thus, opacity arises when an AI system does not provide enough epistemically relevant elements to agent X at time t, so that agent X is not fully aware of how inputs are transformed into outputs and how the system's overall state transitions unfold. Existing explanatory methods may not provide the full range of epistemically relevant elements for all agents; rather, they merely provide some approaches to understanding AI for a few stakeholders, such as AI developers and algorithm engineers. For most agents and stakeholders, such as non-technical end-users [42], the explanations can only make AI less complex, while not fundamentally addressing cognitive and epistemic opacity.
Calibration. Trust calibration refers to the relationship between trust and automation capability [46]. Over-trust arises when trust is higher than the capability of AI, and distrust arises when trust is lower than the capability of AI. When trust matches the capability of AI, it appears on the diagonal [46, p.55]. Therefore, calibrated trust can help users use AI's capabilities properly. Although some studies have provided evidence about how explanations can help improve trust in AI systems [41,58,87,90], and other studies show that explanations can, in turn, promote users' over-reliance on AI [15,41,59], it is unclear how explanations support trust calibration [60]. Research on how explanations help calibrate the correspondence between trust and AI capability is still very scarce.
BDI (belief, desire, and intention). As described by Lee and See [46], Ajzen and Fishbein [3] developed a framework for the definition and construction of trust. In this framework, trust is an attitude, while reliance is a behavior. Available information and personal experience affect the establishment of beliefs. Beliefs and perceptions, in turn, affect attitudes. Attitude determines the generation of intention, and behavior is based on intention. Explanations can provide recipients with information that can be leveraged (e.g., rule extraction by LIME, Shapley values for different features, and visualization), and recipients of explanations can also draw on different levels of personal experience (e.g., knowledge, expertise, and cognition) to process this information. However, how such information and human experience construct beliefs and further advance the generation of trust is seldom addressed by explanatory methods. According to Dazeley et al. [19], most of the widely used explanatory methods currently operate at a low level, as "purely reactive non-intentional systems" [p.8]: their focus is on explaining a single decision based on features and parameters, not the "agent's current internal disposition ... such as belief and/or desires" [p.9]. However, explanations based on decoding beliefs and consciousness need to achieve "social explanation" [p.11], a higher-level explanation paradigm. Social explanation would provide understandable explanations, such as "Why change the meeting time?" and "If I want to raise, how do you plan to play?", which are beyond the reach of the currently existing explanatory methods.
In current research, most techniques and methods in explainability strategies aim to debug parameters, such as heat maps for the results of convolutional neural networks (CNNs), rather than consider the requirements of users [56]. The focus of explainability strategies should be extended to how to deal with socio-technical challenges. Such challenges should not only involve overcoming the non-transparency and non-interpretability of black boxes, but also provide cognitive elements for users and other stakeholders, such as reliability, trust, comprehensibility, and belief. Compared to the large amount of literature on the development and discussion of artificial intelligence (AI) technology and explainable AI (XAI), there is still a significant lack of research aimed at building explainability strategies that help users better understand AI decisions. Therefore, we propose three dimensions for constructing explainability strategies to increase user understanding: simplifying algorithms, appropriate information disclosure, and high-level collaboration.

Three dimensions for better understanding
Simplifying algorithms. Burrell [14] attributed the opacity of AI systems to the expertise required to write algorithms, which is generally not available to the public. She emphasized that the language in which algorithms are written is very different from human language, so an algorithm needs to be explained before it can be understood by most people. A typical example of this opacity is the complexity of deep learning and its weak support for causal inference. Models based on deep learning cannot be easily interpreted. Deep learning is built on correlation rather than causality because "deep learning learns complex correlations between input and output features, but with no inherent representation of causality" [53, p.12]. This can prevent people from understanding how input produces output. For example, "in the case of DNNs, it may not be possible to understand the determination of output" [79, p.51]. This creates opacity. Besides, because deep learning uses non-linear structures, it presents itself as a black box; that is, deep learning does not reveal what leads the model to its conclusions [73]. Likewise, the massive number of parameters in a deep learning system makes it difficult for even developers to annotate a complex neural network in an explainable way [53]. Therefore, AI designers and developers should provide methods to simplify algorithms, so that users can grasp the mechanisms and principles of complex algorithms more easily.
Appropriate information disclosure. For one thing, Pasquale [64] discussed the black box problem in various algorithms. He divided the strategies for keeping the black box closed into three categories: real secrecy, legal secrecy, and obfuscation. These strategies describe insufficient information disclosure from the perspective of firms protecting their business secrets and competitive advantages [14]. Due to insufficient information disclosure, users may not be able to grasp the true state of a system, such as the source of its data or the probability of errors. Additionally, Grether et al. [32] argued that corporations' intention to increase competitive advantage and public trust through information disclosure often leads to disclosure overload, namely a type of behavior by which information is excessively disclosed. Under disclosure overload, the public will fail to find information that benefits them because they cannot retrieve and extract the needed knowledge from a large amount of information [61]; the public will also fall into boredom and anxiety because they spend more time than is available to them, which can distance them from their goal [44]. Therefore, AI development companies should provide appropriate information disclosure methods and legal regulatory solutions for black-box artificial intelligence systems, so that users have a sufficient and correct understanding of the system's mechanisms.
High-level collaboration. Legal scholars, social scientists, domain experts, and computer scientists should strengthen their partnerships and engage users and the public in discussions with experts on algorithms [14]. Companies must interact with consumers to reduce opacity when developing their products [66], and achieving responsible technology design, development, and use requires stakeholder involvement throughout the process [7]. Low collaboration increases the opacity of products. This interaction can be understood as value co-creation, because consumers trust the products that they create jointly with product developers [66]. An example from the research of Prahalad and Ramaswamy is that patients were more willing to follow a treatment plan they had made together with their doctors. In other words, if an AI system, as a product, is produced in a low-collaboration, low-co-creation environment, it may be a black-box product for users. Therefore, to generate appropriate understanding and trust in artificial intelligence systems, AI development should emphasize collaboration, both human-human and human-machine, and provide opportunities for stakeholders to participate in the development process, especially between non-professional users and professionals.

METHODOLOGY

3.1 Literature review and a concept-centric approach
The method of literature review applied in this article draws on an eight-step systematic review guidance [62]. I have also incorporated the concept-centric approach of Webster and Watson [83].
The eight-step review guide is adopted because it provides a methodological way to collate the literature and express the content of a literature review clearly and objectively. Meanwhile, a concept-centric approach helps me present the literature in concise, logical statements and supports me with a data basis for subsequent analyses. The description of these eight steps is shown in Table 2.

Eight steps for the literature review
Step 1: Review Purpose. The purpose of this literature review is to sort out explainability strategies for AI systems, and then discuss how existing explainability strategies can help users better understand artificial intelligence decisions through these three dimensions.
Step 2: Protocol. I established a keyword-based search protocol to retrieve relevant literature (Appendix A.1). These keywords were searched in titles, abstracts, and keywords to expand coverage. They include 'explainable strategy', 'interpretable strategy', 'explainability', 'interpretability', and so on. At the same time, to increase the accuracy of the search results, I limited results to journal and conference papers. The purpose of screening journal and conference papers separately is to maximize the sample size and inclusiveness of retrieval. The literature search period is five years (2019-2023), and the language of the literature is English. Based on the preliminary search results, I further determined the literature to be reviewed by reading it and assessing its relevance to the topic.
Step 3: Search for literature. I queried the Scopus database, which includes web tools for keyword retrieval. Through the initial search process, a total of 1057 journal papers and 1390 conference papers were retrieved. According to Rowe [70], comprehensive coverage in a review is not always required: "Comprehensiveness can also mean sensemaking, which is also important, especially when a review aims at understanding and viewing a landscape of the accumulated knowledge more cohesively but without exploring all its details and thus does not require completeness in the paper's collection" [70, p.246]. Therefore, I identified the 100 most cited journal papers and the 50 most cited conference papers as preliminary search results (see Appendix A.2). This selection was made to consider the contributions, breadth of application, and impact of the explainability strategies involved in the highly cited literature within this research field. Moreover, the analysis of these papers can provide a valuable perspective for a broader range of research areas focusing on explainability strategies.
Step 4: Practical Screening. By reading the titles and abstracts, some literature on specific topics was excluded, such as literature reviews on XAI and literature unrelated to the research topic. I selected 73 journal papers and 18 conference papers (Appendix A.3). The screening criteria were: 1) papers focusing on concepts, ideas, and principles for constructing explainability strategies; 2) papers focusing on the application of explainability strategies in different scenarios.
Step 5: Quality Screening. After further reading the content of the papers, I excluded 48 of the 73 journal papers and 6 of the 18 conference papers. These papers were excluded because they either reviewed existing methods without constructing strategies, applied existing XAI technology to a specific field, or were unrelated to my focus and the research question. Therefore, I retained 25 journal papers and 12 conference papers (Appendix A.4) for further analysis and discussion.
Step 6: Data Extraction. After carefully reading each paper and evaluating its relevance to my review purpose, I extracted data related to the review purpose and kept them in an Excel table.
Step 7: Data Synthesis. Based on the data extracted in Step 6, I divided these 37 papers into three categories corresponding to the three dimensions: simplifying algorithms, appropriate information disclosure, and high-level collaboration. Some papers appear in more than one category because they involve multiple dimensions. Webster and Watson's [83] concept-centric approach was used for further data synthesis. I examined the similarities and differences between the papers, resulting in several review topics. The topics I summarized may not fully cover all the content of the papers; they were selected based on the purpose of this review and my focus on the parts most relevant to each dimension.
Step 8: Write the review. The final step of the review is to write the review report, which mainly includes a 'report of the review results.'

FINDINGS
Because some papers involve multiple dimensions, out of these 37 papers, 26 involve simplifying algorithms, 4 involve appropriate information disclosure, and 10 involve high-level collaboration. Most papers contain explainability strategies related to simplifying algorithms, while relatively few papers cover the other two dimensions (Table 3).

Regarding simplifying algorithms
4.1.1 Semantic explanation. Semantic explanation is the process of understanding the meaning and context of language or data within a particular domain. It is an important aspect of black-box algorithms because it allows us to gain insight into how these algorithms are making decisions and predictions. Semantic explanations provide a way to shed light on these opaque algorithms by analyzing the data inputs and outputs and interpreting results. To facilitate people's understanding of algorithmic decision-making, focusing on the overall prediction of AI is more valuable than analyzing the importance of specific features in the algorithm [6]. This provides a paradigm for thinking about semantic explanation, shifting the focus away from traditional feature analysis. Semantic explanations can also be implemented by embedding attention maps in specific modules of algorithms [26], IF-THEN rules [6], a more natural-language rule basis [17], intentional stance explanations [89], or logical structures similar to those used in neural-symbolic systems [27]. Regarding semantic explanation, Vassiliades et al. [81] focused on using the process of argumentation to convey, step by step, how AI systems make decisions. Although this approach is to some extent similar to embedding discourse elements into machine learning algorithms [35], both require more worked examples to show how they can be implemented in practice. The key to semantic explanation also lies in a comprehensive review of data, biases, performance, and decision-making [65], which gives people the opportunity to understand more comprehensively what algorithms do.
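As one concrete illustration of the IF-THEN style mentioned above, the following sketch (a simplified approximation using scikit-learn, not the specific method of any cited work) distills a black-box classifier into a shallow decision tree whose branches read as IF-THEN rules.

```python
# Minimal sketch: a shallow decision tree as a global surrogate whose branches
# can be read as IF-THEN rules describing the black-box model's behavior.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a small, readable tree to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# export_text renders the tree as nested IF-THEN conditions.
print(export_text(surrogate, feature_names=list(data.feature_names)))
print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
```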
4.1.2 Architecture explanation. An overall explainability architecture should encompass both the technical aspects of the algorithm and the many factors involved in understanding and interpreting the results. It is important to define the goals of the explainability architecture. These may include improving the accuracy and fairness of the algorithm, increasing transparency and trust in the decision-making process, and providing insights into the algorithm's internal workings. The Temporal Fusion Transformer (TFT) model based on an attention architecture, developed by Lim et al. [49], can analyze the importance of variables, visualize persistent temporal relationships, and identify significant regime changes. Similarly, the framework developed by Kim et al. [43] for text classification also provides a visual approach that is easy for humans to understand. Another explainability architecture involves pruning and compressing neural networks [34,86] to obtain simpler interpretable models. The Cognitive-GAM (COGAM) proposed by Abdul et al. [1] can provide explanations with the required cognitive load and accuracy by combining expressive nonlinear generalized additive models (GAMs) with simpler sparse linear models. AlphaStock, based on reinforcement learning, can construct an interpretable business investment strategy and logic [82].
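As a hedged illustration of the pruning-and-compression idea cited above [34,86], the sketch below (assuming PyTorch is available; it is not the specific method of the cited works) zeroes out most low-magnitude weights in a small network, leaving a sparser model whose remaining connections are easier to enumerate and inspect.

```python
# Minimal sketch: magnitude-based pruning to obtain a sparser, easier-to-inspect network.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

# Prune 70% of the smallest-magnitude weights in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.7)
        prune.remove(module, "weight")  # make the zeroed weights permanent

for i, module in enumerate(model):
    if isinstance(module, nn.Linear):
        nonzero = int((module.weight != 0).sum())
        total = module.weight.numel()
        print(f"layer {i}: {nonzero}/{total} weights remain after pruning")
```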

Local and global explanation.
Local and global explanations are two approaches to interpreting the decisions made by black-box AI algorithms. Local explanations focus on explaining the decisions made by the model for a specific input or instance. These explanations help users understand why a particular decision was made by the model for a specific input. Local explanations can be generated using a variety of techniques, such as LIME (Local Interpretable Model-Agnostic Explanations) or SHAP (SHapley Additive exPlanations), which create surrogate models or perturbations to identify the input features that are most important for the model's decision. Global explanations aim to provide an overview of the model's behavior across the entire dataset or population. These explanations help users understand how the model behaves overall and identify any patterns or biases in its decisions. By using an autoencoder to modify LIME, stability and local fidelity can be improved while generating explanations [75]. Doctor XAI can provide local explanations to explain the principles behind the classification of individual data points [63]. The combination of trees and local explanations is more helpful for experts to understand model decisions [51]. Moreover, two visualization tools that represent the importance of local features, Partial Importance (PI) and Individual Conditional Importance (ICI) plots, can visualize how changes in a feature affect model performance, both on average and for individual observations [16]. Global explanations, by contrast, can be generated using techniques such as Partial Dependence Plots (PDP) or Accumulated Local Effects (ALE) plots, which visualize the relationship between a feature and the model's output across the entire dataset. Also regarding global explanations, models based on the application of the Shapley method within Lorenz Zonoids can provide a unified standard for evaluating the predictive accuracy and explainability of the explanatory variables included in machine learning models; such models are therefore theoretically easier to explain, but they need to be validated in more environments [30].
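The following minimal scikit-learn sketch illustrates the global versus local contrast using the partial dependence machinery named above: the averaged curve is a global summary of one feature's effect, while the individual conditional expectation (ICE) curves behind it show instance-level behavior (the PI/ICI plots of [16] and ALE plots are related but distinct techniques).

```python
# Minimal sketch: a global partial dependence curve overlaid with per-instance ICE curves.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" draws the averaged partial dependence (global view) together
# with per-instance ICE curves (local view) for feature 0.
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="both")
plt.show()
```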

Causal and interactive explanation.
Causal explanations aim to identify the causal relationships between the input variables and the output of the model. If a model is predicting whether a loan application will be approved, a causal explanation could identify the factors that are causing the model to approve or reject certain applications, such as credit score or income. Interactive explanations provide users with a way to interact with the model and explore its behavior in real time. This approach is particularly useful for understanding how a model might behave under different scenarios or inputs. Interactive explanations can take many forms, such as interactive visualizations or simulations, and can be designed to allow users to adjust the inputs to the model and observe how it responds. Holzinger et al. [37] suggest introducing causal relationships from human models into AI models to explain the reasons for decision-making, because causal relationships can reduce the high opacity of algorithms, improve model explanations [25], and improve user acceptance [76]. Causality can also be strengthened by adding UX considerations to AI to enhance human-machine interaction, thereby establishing transparent interaction and fair algorithms [76]. This can become the foundation for establishing a human-AI interface in the future [38]. Visual output-based interaction can combine XAI methods with language information provided by virtual agents, helping to increase trust and achieve responsible artificial intelligence [84]. "We need interactive Human-AI interfaces that enable a domain expert to ask questions to understand why a machine came up with a result, and to ask what-if questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result" [36, p.175].
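A simple way to make the what-if idea concrete is to perturb a single input of a fixed instance and watch the model's output move; the sketch below (an illustration only, not the interactive human-AI interfaces the cited authors envision) does exactly that with a scikit-learn classifier.

```python
# Minimal sketch: a "what-if" probe that varies one feature and observes the model's response.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

instance = X[0].copy()
baseline = model.predict_proba([instance])[0, 1]
print(f"baseline P(class=1) = {baseline:.3f}")

# What if feature 0 had taken a different value, everything else unchanged?
for delta in (-2.0, -1.0, 1.0, 2.0):
    what_if = instance.copy()
    what_if[0] += delta
    p = model.predict_proba([what_if])[0, 1]
    print(f"feature 0 shifted by {delta:+.1f} -> P(class=1) = {p:.3f}")
```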

Regarding appropriate information disclosure
4.2.1 Professional explanation. As artificial intelligence (AI) becomes increasingly prevalent in various industries, especially black-box AI, professionals need to promote the correct use of AI to ensure that it is being used ethically, responsibly, and effectively. This could include establishing ethical guidelines and implementing oversight mechanisms. Professionals in specific fields, such as doctors and judges, should participate in the audit and supervision of AI usage, and before AI is applied, they should comprehensively evaluate its internal functions and working principles [68] to ensure the comprehensibility and interpretability of AI results.

Legal explanation.
Legal intervention in the correct usage of AI is crucial to ensure that AI is developed and used ethically, responsibly, and in compliance with legal principles. This includes laws ensuring that AI is used in compliance with ethical principles, such as fairness, accountability, transparency, and privacy. Legal intervention can also contribute to establishing standards for data protection and cybersecurity with which AI systems must comply.
The law needs to take responsibility for the regulation of AI, including the acquisition and analysis of relevant data [5]. However, it must be clarified that legal regulations, such as the GDPR, should not only assume responsibility for AI governance but also provide clearer explanations for the development of AI systems, such as the extent to which algorithms should be transparent and how much of the development cost of such transparency AI suppliers should bear [13].
4.2.3 AI supplier explanation. AI supplier explanation can play a crucial role in ensuring the correct use of artificial intelligence (AI) by providing transparency and accountability in the development and use of AI systems. AI suppliers can provide transparency by disclosing how an AI system works, including its data sources, algorithms, and decision-making processes. This information can help stakeholders, including users, regulators, and other interested parties, understand how the AI system is being used and identify any potential biases or ethical concerns. AI suppliers can also establish ethical principles and guidelines for the development and use of AI systems and incorporate these principles into the design and operation of the AI system. AI suppliers can collaborate with stakeholders to ensure that AI systems are developed and used in a way that benefits society as a whole. This includes engaging with regulators, policymakers, and other interested parties to ensure that the AI system is compliant with legal and ethical frameworks. Suppliers' declarations of conformity (SDoCs) are a typical example. Usually, such documents are not required by law, but their clear statements about the purpose, performance, safety, and other aspects of an AI system can help users inspect the AI step by step and strengthen their understanding of AI products [9].
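A minimal, hypothetical sketch of what a machine-readable SDoC-style disclosure might contain is shown below; the field names and values are illustrative placeholders, not a standardized schema from [9].

```python
# Minimal sketch: a structured, supplier-provided fact sheet with placeholder fields.
import json

fact_sheet = {
    "intended_purpose": "Decision support only; outputs require human review.",
    "training_data": {
        "sources": ["<describe data sources here>"],
        "known_gaps": "<describe known coverage gaps here>",
    },
    "performance": {"metric": "<e.g., AUC>", "evaluation_set": "<held-out benchmark>"},
    "safety_and_limits": "<conditions under which the system should not be used>",
    "explainability_support": ["per-case feature attributions", "model documentation"],
}

print(json.dumps(fact_sheet, indent=2))
```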

Regarding high-level collaboration
4.3.1 Human-machine collaboration. Human-machine collaboration can help explain the black box of artificial intelligence (AI) by providing transparency and explainability about how the AI system works, even if the system's inner workings are not fully understood. Human-machine collaboration can provide context around the AI system and can break down complex AI concepts into simpler terms that can be easily understood by stakeholders. This can include using analogies and examples to help explain technical concepts in a way that is accessible to non-experts, which can help build trust and understanding of AI systems for stakeholders. The transfer of tacit knowledge between humans and machines is crucial to identifying prejudices and errors, encouraging people to trust AI machines more and accept their decisions with firmer beliefs [20]. Incorporating belief-rule-based knowledge from human experts and users into AI can help build explainable AI systems [72]. XAI-based question banks can help connect users' demands for explainability in artificial intelligence with the technical capabilities provided by XAI [50]. Similarly, the question-answering task derived from the popular trivia game Quizbowl emphasizes providing explanations to different users corresponding to their skill levels, so as to further improve cooperation between human beings and AI [24]. Therefore, the key to constructing AI explainability strategies through human-machine collaboration may lie in the interaction between stakeholders and the human-machine interface community [68], and in integrating HCI strategies such as value-sensitive design and participatory design into the development of artificial intelligence that places people at the center of technology [23].

Human-human collaboration.
Human-human collaboration can promote open communication and dialogue among different stakeholders, including developers, end-users, regulators, and the public. Human-human collaboration can develop clear explanations of how a black-box AI works, including its purpose, inputs, outputs, and decision-making process. This can be done through various means, such as visualizations, user manuals, and technical reports, in which end-users are provided with opportunities to interact with the system. To build an explainable AI system, human-human collaboration needs to distinguish roles, processes, goals, and strategies in different organizations and AI environments [39]. All stakeholders should achieve interdisciplinary and multi-perspective collaboration [5] and effective, high-quality communication [69]. The advantages and disadvantages of systems should be discussed to determine their compliance with social norms [2], which can lead all parties to better understand and trust the AI system.

Response to the research question
Q: How can existing explainability strategies help users better understand artificial intelligence decisions through these three dimensions: simplifying algorithms, appropriate information disclosure, and high-level collaboration? Regarding simplifying algorithms, the existing explainability strategies offer six ways to explain AI: 1) semantic explanation; 2) architecture explanation; 3) global explanation; 4) local explanation; 5) causal explanation; 6) interactive explanation. Among them, causal, interactive, and semantic explanations tend to explain the mechanisms of AI in a user-comprehensible manner, while local and global explanations are more oriented toward system developers and professionals. Although architecture explanation can simplify algorithms in different ways, whether it can provide an understanding of AI decision-making for users, especially those without a technical background, needs more validation.
Regarding appropriate information disclosure, the existing explainability strategies provide three ways to understand AI: 1) professional explanation; 2) legal explanation; and 3) AI supplier explanation. Professional explanations can evaluate, audit, and supervise the content provided by the other two types of explanation. Legal explanation can provide judicial supervision schemes, while AI supplier explanation can provide information disclosure about algorithms.
Regarding high-level collaboration, existing explainability strategies provide explanations based on human-machine and human-human interactions. Human-machine interaction discusses the information transmission between humans, AI, and other components within an AI system. Human-human interaction focuses on the collaboration among all stakeholders within an AI-centered system, across multiple domains, disciplines, and departments.

Facing the challenges of explanations
Considering the challenges described in section 2.1, simplifying algorithms can make it easier for users to understand the workings and decision-making process of AI systems, thus improving comprehensibility. Although it remains to be further validated whether the strategies involved in simplifying algorithms can fundamentally address the challenge of reliability, providing verifiable explanations through a combination of semantic, local, and global explanations can be considered a promising research direction for future reliable explanation strategies. At the very least, it can provide users with the opportunity to assess the reliability of system decisions. Appropriate information disclosure can increase the transparency of the system, helping users understand the internal mechanisms and decision criteria of AI systems. Improved transparency and explainability can foster trust calibration and aid users in understanding the basis of system decisions. High-level collaboration can facilitate human-machine and human-human interactions, enabling stakeholders to collaborate, share knowledge, and understand AI systems collectively. This collaboration can contribute to improved reliability and comprehensibility, as professionals and users can jointly explore and validate the decision-making process of the system while learning from and understanding each other. High-level collaboration also helps address the BDI challenge, as interactions among different stakeholders can reveal the system's beliefs, desires, and intentions, promoting shared understanding and calibration of system behavior.

Potential limitations in implementation
When it comes to simplifying algorithms, the research examined here suggests that researchers need to consider computational complexity, scalability, and performance. Some explainability strategies may introduce significant computational burdens, especially for complex models and large-scale datasets. For instance, semantic and architecture explanations might require analyzing the internals of the model, leading to increased computational costs. Global explanations and causal explanations might necessitate a comprehensive understanding of the behavior of the entire dataset or model, which could become challenging in large-scale applications and result in lower scalability. The effectiveness of interactive explanations partially depends on the feedback and guidance provided by users. However, misunderstandings and biases introduced by human factors during the interaction process could negatively impact the model's performance, which might be unacceptable in certain sensitive applications.
Regarding appropriate information disclosure, professional explanations may require domain-specific knowledge that could still be difficult for the average user to comprehend, thereby limiting the conveyance and understanding of information. Legal explanations involve the jurisdictional aspects of legal frameworks and regulations, which could reduce the general applicability of legal explanations due to regional differences. Relying on AI suppliers for explanations is influenced by the vendors' commercial interests, making it challenging to provide comprehensive and objective explanations.
Concerning high-level collaboration, collaboration among individuals might face challenges in communication and coordination, especially when multiple domain experts are involved and challenges to each other's authority can undermine the effectiveness of communication. In addition, terminologies in different professional fields may set barriers to effective communication. Human-machine collaboration requires the design of effective interfaces and interaction methods that allow users to comprehend the model's decision-making process. Designing user-friendly and efficient interfaces remains a challenge, however, particularly when targeting different user groups.

Validation of strategies
Ensuring the effectiveness of explainability in AI systems is crucial for promoting their sustained use. Despite the rich variety of approaches in the current literature across the three dimensions, empirical validation of their effectiveness is lacking. Validating these strategies can reveal their potential benefits and limitations in practical applications and shed light on their impact on generating appropriate user trust in real-world scenarios. On one hand, the methods for validating explainability strategies should involve experiments and case studies in various scenarios. By applying different strategies to specific decision contexts through experimental designs and case studies, and collecting user feedback, researchers can assess the impact of these strategies on user comprehension. Furthermore, validating strategies for user groups with diverse backgrounds allows researchers to understand the pros and cons of different strategy deployments and personalized settings. On the other hand, strategy validation helps researchers evaluate the actual impact of these strategies on user trust in AI systems. The validation outcomes will guide researchers in selecting suitable strategies in different contexts to enhance user trust and promote selective utilization of AI systems. Additionally, validation processes can in turn contribute to improving the strategies, thereby better accommodating distinct user needs and application domains. Therefore, the significance of strategy validation lies not only in strengthening the connection between explainability theory and practice, but also in advancing the realization of trustworthy and responsible AI.

User-centered explainability strategies
The long-standing lack of research on user needs [8], and the tendency of much research to focus on stakeholders within the AI system rather than external stakeholders [18], make it necessary to "start from a user-centered perspective" [p.15]. In terms of future research directions for user-centered explainability strategies, several potential areas of focus relate to my findings.
One possible direction is to explore ways to combine different types of explanations to provide users with a more comprehensive understanding of AI systems. For example, a global explanation that provides an overview of how an AI system works could be combined with local explanations that explain specific predictions or decisions made by the system. Causal explanations could also be used to help users understand the reasoning behind the system's outputs. Another potential direction for research is to focus on developing more interactive and engaging explanations that use visualizations and other interactive tools. This could help to make explanations more accessible to users who are not familiar with technical jargon or complex mathematical models. Interactive explanations could also be designed to provide users with feedback and opportunities to test their understanding of the system.
Additionally, future research could focus on developing legal frameworks and guidelines for ensuring that AI systems are transparent and explainable, particularly in high-stakes applications such as healthcare or finance. Professional explanations could also be developed to help practitioners in fields such as medicine or law to understand how AI systems are being used and to make informed decisions based on their outputs. AI supplier explanations could focus on providing information about how different AI systems work and what types of explanations are available to users.
Finally, research could also explore ways to improve human-machine interaction and user experience concerning AI explainability. This could involve developing interfaces that are intuitive and easy to use, as well as providing clear and concise explanations that are tailored to the user's level of understanding. By improving the overall user experience of AI systems, researchers could help to increase user trust and adoption of these technologies.
Explainability strategies are making great strides, but "there is still some way to go to meet the expectations of end-users, regulators, and the general public" [77, p.15]. Explainable solutions still have limitations in increasing user trust and understanding of AI, as they may only be used as an analytical tool [28]. It should also be noted that designing a system that can meet the needs of both experts and ordinary users is not an easy task [67], and there is no interpretable method that can automatically customize explanations for end users in specific fields [p.377]. Future work should involve more user surveys.

Research limitation
The literature review in this article examines literature on explainability strategies in the AI field in order to identify key themes, debates, and research gaps relevant to the research question. While this approach has advantages in providing a comprehensive overview of the most influential literature in the field, it also has limitations.
One of the limitations of this approach is that attention to citation counts may exclude valuable but under-cited literature that could potentially contribute to the research question, such as [19,54,59,78]. This could be due to a range of factors, such as publication bias, the relative newness of the research, or differences in citation practices across disciplines or subfields. As a result, the literature review may not provide a fully representative or nuanced picture of the current state of research on the topic. To address this limitation, future research could conduct a more comprehensive search that includes newer or under-cited literature that may contribute to the research question. This could involve using a wider range of databases, search terms, or citation metrics to identify relevant literature. Additionally, interviews with experts in the field could provide additional insights and perspectives on the topic and help to identify emerging research trends or areas of debate that may not be fully captured by the existing literature.
Another limitation is that, in a rapidly developing field such as AI and explainability, the sample of 37 papers may offer only a limited response to the research question. Future research should consider expanding the number and scope of the papers reviewed to ensure a more comprehensive inclusion of the literature on explainability strategies. In addition to journal and conference papers, other resources should also be reviewed, such as AI companies' technical reports, the latest releases from AI developers, and government policy updates on AI development. This may help achieve a more balanced coverage of literature on the three dimensions discussed in this article.

CONCLUSION
I conducted a literature review of existing explainability strategies. My examination found that the existing literature focuses on providing explanations through simplifying algorithms, while there is less emphasis on providing appropriate information disclosure and encouraging high-level collaboration; research on these latter two aspects therefore needs to be strengthened. While more practical validation is required to address the challenges of explanation (reliability, comprehensibility, calibration, and BDI), developing explainability strategies that appropriately integrate these three dimensions can be considered. Existing explainability strategies still harbor numerous potential limitations in their implementation, and there remains a substantial amount of work to do in validating these strategies in real-world environments. Nonetheless, these challenges also serve as a compass for researchers, indicating the next steps for AI explainability strategies. Furthermore, future research on user-centered explainability strategies should consider the following aspects: first, paying attention to user feedback on system understanding; second, exploring how AI suppliers can better provide relevant information about the system from their perspective; third, focusing on interactive AI development between professionals and users; and fourth, customizing explanations for different users.

Table 1: The challenges of explanations

Table 2: Eight steps for the literature review

Table 3: Existing explainability strategies on three dimensions and their topics