Summary

Introduction. Artificial Intelligence has emerged as a transformative force across various industries, with a particularly profound impact on healthcare. Patients today increasingly turn to the internet in search of information about their medical conditions, including tools such as AI-based chatbots. However, information from unverified sources can influence patients’ decisions regarding treatment options. This study aims to evaluate the quality of medical information provided by ChatGPT for Assigned Female At Birth (AFAB) patients considering gender-affirming surgery.
Methods. Given that some patients may use ChatGPT as an information source for their medical conditions, questions were posed to the chatbot in the same manner a patient interested in gender-affirming surgery would pose them. The quality of the information was assessed using the standardized EQIP scale. The survey involved 30 individuals: 15 plastic surgery residents and 15 non-healthcare professionals, with data collected in February 2023 and analyzed using SPSS software version 28.0.
Results. Separate surveys evaluated the quality of information provided by ChatGPT regarding two primary procedures for AFAB patients undergoing gender-affirming surgery: phalloplasty and top surgery. The quality of the information was found to be adequate in both cases, with significant qualitative differences across the various survey sections. ChatGPT excelled in delivering information in a simple and accessible manner, earning high scores in the “Structure Data” area. However, the “Content Data” area, representing the completeness of information, was deemed merely sufficient. A significant deficiency was noted in the “Identification Data” section, highlighting the absence of information about revisions, bibliographies, and the names of the entities or individuals providing content.
Conclusions. ChatGPT demonstrated excellent capability in providing information in a straightforward and accessible manner, achieving high scores in the “Structure Data” area in both evaluations. The completeness of information, represented in the “Content Data” area, was considered sufficient. However, a notable deficiency in the “Identification Data” section underscored the absence of details regarding revisions, bibliographies, and content authorship. Although the content score could be improved by adjusting the number and phrasing of questions, the lack of bibliography and source verification remains a significant limitation of this tool. ChatGPT offers advantages such as ease of communication, privacy, anonymity, and overcoming language barriers; nonetheless, given its limitations, its role should always be seen as supplementary to that of the surgeon.

 

INTRODUCTION

Artificial Intelligence (AI) is defined as the study of algorithms that empower systems to perform cognitive tasks such as reasoning, problem-solving, word recognition, and decision-making 1.

AI has emerged as a transformative force across various industries 2, with a particularly profound impact on healthcare. Key subfields of AI, including machine learning, deep learning, natural language processing, diagnostic assistance, and facial recognition, have potential applications in plastic surgery 3-5.

Among AI applications are chatbots like ChatGPT, a generative language model capable of understanding and generating text in natural language. This allows for conversational interactions with patients, addressing health-related inquiries.

As patients increasingly turn to the internet to research their medical conditions, ChatGPT is no exception; however, the information obtained – often from unverified sources – may influence patients’ treatment decisions 6,7.

This study aims to evaluate the quality of medical information provided by ChatGPT for Assigned Female At Birth (AFAB) patients undergoing gender-affirming surgery.

MATERIALS AND METHODS

Given that some patients may use ChatGPT as a resource for medical information, questions were posed to the chatbot in a manner consistent with how an Assigned Female At Birth (AFAB) patient interested in gender-affirming surgery might inquire. Specifically, information was sought about two of the most commonly performed procedures in this field: top surgery and phalloplasty.

The quality of information provided was assessed using a standardized scale, the EQIP 8,9, which consists of 36 questions divided into three sections: Content (items 1-18), Identification Data (items 19-24), and Structure (items 25-36). Each item requires a YES or NO response, with each affirmative answer earning one point. The total score ranges from 0 to 36, with a passing threshold typically set around 18.
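As a concrete illustration of this scoring model, the sketch below tallies the three EQIP subscores and the total. The section boundaries follow the description above, while the data structure and example answers are illustrative only and do not reproduce the study data.

```python
# Minimal sketch of EQIP scoring as described above. The section boundaries
# (Content 1-18, Identification 19-24, Structure 25-36) follow the scale;
# the data structure and example answers are illustrative only.
from typing import Dict

SECTIONS = {
    "Content": range(1, 19),          # items 1-18
    "Identification": range(19, 25),  # items 19-24
    "Structure": range(25, 37),       # items 25-36
}

def eqip_score(answers: Dict[int, bool]) -> Dict[str, int]:
    """Each YES answer earns one point; returns section subscores and the 0-36 total."""
    scores = {
        name: sum(1 for item in items if answers.get(item, False))
        for name, items in SECTIONS.items()
    }
    scores["Total"] = sum(scores[name] for name in SECTIONS)
    return scores

# Example: YES to items 1-10 and 25-33 gives 10 + 0 + 9 = 19/36.
example = {i: True for i in [*range(1, 11), *range(25, 34)]}
print(eqip_score(example))
# {'Content': 10, 'Identification': 0, 'Structure': 9, 'Total': 19}
```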

The “Content Data” section evaluates the quality of information related to medical issues, potential solutions, benefits and risks, and warning signs that patients should be able to recognize. The “Identification Data” section assesses the reliability of the sources, considering factors such as bibliography, revision date, and the name of the person or entity providing the document. The “Structure Data” section focuses on the simplicity and accessibility of the language used, including the use of short sentences, a respectful tone, clear presentation of information, and logical organization.

The survey was administered to a group of 30 individuals, comprising 15 plastic surgery residents and 15 non-healthcare professionals. The data, collected in February 2023, were analyzed using SPSS software version 28.0 (IBM Corporation; Armonk, New York).
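The tables in the Results report, for each item, the percentage of the 30 raters answering YES together with a single consensus response. A plausible reading, assumed here purely for illustration since the exact rule is not stated in the methods, is that an item counts as YES when the majority of raters mark it so:

```python
# Hypothetical aggregation of the 30 raters' YES/NO answers into the
# per-item percentages and consensus response shown in Tables I-II.
# The majority-vote rule is an assumption made for illustration.
from typing import List, Tuple

def aggregate_item(rater_answers: List[bool]) -> Tuple[float, float, str]:
    """Return (yes %, no %, consensus response) for one EQIP item."""
    n = len(rater_answers)
    yes_pct = 100 * sum(rater_answers) / n
    consensus = "Yes" if yes_pct > 50 else "No"
    return yes_pct, 100 - yes_pct, consensus

# Example: 18 of 30 raters answering YES -> 60% / 40% and a "Yes" consensus,
# matching several items in the tables.
print(aggregate_item([True] * 18 + [False] * 12))  # (60.0, 40.0, 'Yes')
```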

RESULTS

Two separate surveys were carried out to evaluate the quality of information provided by ChatGPT concerning two major procedures in Assigned Female At Birth (AFAB) patients undergoing gender-affirming surgery: phalloplasty and top surgery.

The quality of information was found to be marginally adequate in both assessments, with notable qualitative differences across the survey sections.

The survey on “Phalloplasty” (Tab. II) yielded an average total score of 19/36, while the survey on “Top Surgery” (Tab. I) achieved an average score of 18/36.

In both surveys, ChatGPT demonstrated a commendable ability to deliver information in a clear and accessible manner, receiving high scores in the “Structure Data” category (9/12 in each survey).

Regarding the completeness of the information provided, as indicated in the “Content Data” section, the results were adequate (10/18 and 9/18).

However, a significant shortcoming was observed in the “Identification Data” section, with a score of 0/6 in both surveys. This deficiency underscores the absence of critical information, including references, bibliographies, and the names of the entities or individuals responsible for providing the content.
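For clarity, the section subscores reported above combine into the totals as follows (values taken from the Results and Tables I and II):

```python
# Composition of the reported totals from the section subscores.
subscores = {
    "Phalloplasty": {"Content": 10, "Identification": 0, "Structure": 9},
    "Top surgery":  {"Content": 9,  "Identification": 0, "Structure": 9},
}
for procedure, parts in subscores.items():
    print(f"{procedure}: {sum(parts.values())}/36")
# Phalloplasty: 19/36
# Top surgery: 18/36
```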

DISCUSSION

In recent years, technology’s role in the healthcare setting has advanced significantly, offering, among other things, innovative ways for patients to obtain information about their medical conditions. As websites and platforms that disseminate medical information continually evolve, there is an increasing need to identify high-quality and reliable sources. Among these innovative tools are AI-based platforms, notably chatbots such as ChatGPT.

In this context, ChatGPT emerges as a valuable resource for individuals considering plastic surgery, including gender reassignment surgery.

The terminology used to discuss gender identity and related medical care has evolved significantly, reflecting increased respect for gender-nonconforming individuals. Previously common terms such as “transsexual” and “sex change operations” are now considered outdated and pathologizing. Terms like MTF (Male-to-Female) and FTM (Female-to-Male) were also used but fail to fully capture the spectrum of gender identities recognized today. Contemporary terminology is more inclusive and precise. “Gender-affirming care” now encompasses medical, psychological, and social interventions that support an individual’s gender identity. Terms like AMAB (Assigned Male at Birth) and AFAB (Assigned Female at Birth) acknowledge that gender assignment at birth may not align with one’s true gender identity. “Non-binary” describes identities beyond the male/female binary, allowing broader recognition of gender diversity. “Gender Dysphoria” has replaced “Gender Identity Disorder,” reflecting a more compassionate understanding of the distress experienced by some individuals. “Gender-Affirming Hormone Therapy (GAHT)” describes hormone treatments that align physical characteristics with gender identity.

ChatGPT, utilizing the GPT-4 architecture, comprehends and employs contemporary, respectful language related to gender identity. It accurately details how AMAB and AFAB individuals might seek gender-affirming treatments and acknowledges the importance of inclusive terms like “non-binary.” ChatGPT ensures that discussions are both accurate and sensitive, fostering a more compassionate environment for care. The shift to updated terminology represents a cultural and medical progression towards greater inclusivity and respect for gender diversity 10-14.

The primary aim of this study was to evaluate the quality of information provided by ChatGPT for patients undergoing gender-affirming surgery, focusing on two common procedures: phalloplasty and top surgery. The EQIP scale was used for an objective and standardized assessment. Information was solicited through straightforward questions, similar to those that could be posed by a prospective patient.

Gender-affirming surgery has emerged as the most effective treatment for individuals with gender dysphoria – a condition in which one’s gender identity does not align with their biological characteristics – within a broader care pathway encompassing psychotherapy, hormonal therapy, and surgery 15.

Gender-affirming surgery has been shown in numerous studies to alleviate some of the anguish experienced by patients with gender dysphoria. Following surgery paired with endocrinological and psychiatric care, social life, mental and physical health, and life satisfaction all improve 16.

ChatGPT can be a valuable resource for these sensitive patients, offering information in a non-judgmental and inclusive manner. Gender-affirming surgeries include both genital and non-genital procedures. Major genital procedures for AFAB patients involve penile and scrotal reconstruction, while AMAB patients might undergo penectomy and orchidectomy. Non-genital treatments include facial feminization surgery, voice surgery, and breast enlargement or mastectomy.

Guidelines and standards of treatment, including surgical eligibility requirements, are currently published and reviewed by the World Professional Association for Transgender Health for patients suffering from gender dysphoria 13. In Assigned Female At Birth (AFAB) patients undergoing gender-affirming surgery, the two primary interventions are top surgery and phalloplasty. Phalloplasty involves creating a penis-like structure, with goals including an aesthetically pleasing phallus 17, complete urethroplasty without strictures or fistulae 18, tactile and erogenous sensitivity, the ability to urinate while standing, and the ability to achieve an erection and engage in penetrative intercourse. Various flaps may be used to perform phalloplasty 17.

To assess the quality of information provided by ChatGPT, the chatbot was queried about phalloplasty, and the responses were analyzed using the EQIP scale. The results, as shown in Table II, yielded a score of 19/36.

Top surgery, or chest contouring, involves breast removal to create a male chest. It can be performed using three techniques: remote incision without skin excision (“keyhole mastectomy”), periareolar skin excision, and double incision mastectomy (DIM). All approaches may result in skin excess and nipple retraction, with aesthetic outcomes dependent on skin contraction 19.

Information about top surgery was similarly evaluated using the EQIP scale, yielding a score of 18/36, as shown in Table I.

Surgeons are crucial in the healthcare system, not only as skilled practitioners but also as primary communicators of medical information, guiding patients with accurate and reliable sources. At the same time, AI is increasingly becoming a reference point for patients 20. This study aims to investigate the quality of information provided by ChatGPT and evaluate its potential role in the doctor-patient relationship. While ChatGPT offers accessible and understandable information, its reliability needs thorough examination to determine whether it can be a viable tool for healthcare professionals.

ChatGPT proves to be a valuable resource for individuals seeking information on gender reassignment surgery, combining accessibility, anonymity, and readily available information. Many individuals considering gender-affirming surgery may prefer to seek information anonymously.

Artificial intelligence is gaining prominence in the medical field, with various studies exploring its applications.

Najafali et al. 21 examined how chatbots could be integrated into gender-affirming surgery, highlighting areas for increased adoption such as partnering with gender communities, ensuring privacy, enhancing cultural competence, and providing multilingual support.

Also, Walker et al. 22 assessed ChatGPT’s reliability and found it comparable to static internet information.

Recent studies have also evaluated ChatGPT’s performance in plastic surgery contexts. Xie et al. 23 examined ChatGPT’s responses in rhinoplasty consultations, finding it capable of delivering coherent and comprehensible answers.

Grippaudo et al. 24 assessed ChatGPT’s information on breast plastic surgery, noting its role as a bridge between medical professionals and patients, but highlighting the lack of source verification.

In the present study as well, the “Identification Data” section obtained a very low score (0/6) in both investigations.

In this study, ChatGPT demonstrated effectiveness in providing simple and comprehensible information, achieving high scores in the “Structure” category of the EQIP scale. However, significant issues were noted due to the absence of references, resulting in low scores in the “Identification Data” section. This tool remains valuable but requires addressing critical issues.

AI’s rapid advancement, particularly through platforms like GPT-4, holds transformative potential in plastic surgery. While not flawless, GPT-4’s ability to analyze complex cases and provide viable treatment options highlights its promise as an adjunct to traditional care, emphasizing the growing integration of AI in personalized medical treatments 25.

Limitations include the general nature of the questions posed, which may affect the content’s quality, and the lack of source verification. Additionally, since questions were posed in English, the quality of information in other languages may differ. ChatGPT’s ability to continuously learn and update its database means that future assessments may vary.

The integration of AI and digital tools in plastic surgery, such as predictive platforms and deep learning models, is revolutionizing patient care by enhancing surgical planning and outcomes. These advancements, particularly relevant to gender-affirming procedures, emphasize the increasing role of technology in delivering precise, personalized, and efficient medical treatments 26.

CONCLUSIONS

ChatGPT is increasingly employed as a tool for information retrieval across various domains, including medicine. Plastic surgery involves numerous procedures, and for patients undergoing surgeries with substantial psychological effects, such as gender-affirming surgery, anonymous information sources like ChatGPT can be particularly valuable. This study offers a thorough assessment of the quality of information provided by ChatGPT for Assigned Female At Birth (AFAB) patients undergoing gender-affirming surgery, employing the EQIP scale for an objective evaluation. The tool has shown remarkable proficiency in delivering information in a straightforward and accessible manner, attaining high scores in the “Structure” category. The completeness of the information, assessed in the “Content” category, was considered adequate. However, a notable shortfall was observed in the “Identification Data” section, revealing a lack of details about revision, bibliography, and the entity or individual responsible for the content. While the content score could improve with refined questioning, the absence of references and source verification remains a significant drawback. Thus, ChatGPT’s role should be regarded as supplementary to that of the healthcare professional.

Conflict of interest statement

The authors declare no conflict of interest.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author contributions

FRG: A

AP: D

VM: DT

LS: S

AP, VM, LS: W

RD: O (supervised the manuscript development)

List of Abbreviations

A: conceived and designed the analysis

D: collected the data

DT: contributed data or analysis tool

S: performed the analysis

W: wrote the paper

O: other contribution (specify contribution in more detail)

Ethical considerations

Not applicable.

Declaration of AI and AI-assisted technologies in the writing process

During the preparation of this work the authors used ChatGPT in order to improve readability. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

History

Received: May 31, 2024

Accepted: September 4, 2024

Figures and tables

Question Yes (%) No (%) Response
Content data
Initial definition of which subjects will be covered 100 0 Yes
Coverage of the above defined subjects 60 40 Yes
Description of the medical problem 90 10 Yes
Definition of the purpose of the medical intervention 100 0 Yes
Description of the treatment alternatives (including no treatment) 60 40 Yes
Description of the sequence of the medical procedure 100 0 Yes
Description of the qualitative benefits 90 10 Yes
Description of the quantitative benefits 0 100 No
Description of the qualitative risk and side effects 0 100 No
Description of the quantitative risk and side effects 0 100 No
Addressing quality of life issues 100 0 Yes
Description of how potential complication will be dealt with 0 100 No
Description of precautions that the patient may take 80 20 Yes
Mention of the alert signs that the patient may detect 0 100 No
Addressing medical intervention cost and insurance issue 0 100 No
Specific contact details for hospital services 0 100 No
Specific details of other sources of reliable information/support 0 100 No
The document covers all relevant issues on the topic 20 80 No
Identification data
Date of issue or revision 0 100 No
Logo of the issuing body 0 100 No
Name of the persons or entities that produced the document 0 100 No
Name of persons or entities that financed the document 0 100 No
Short bibliography of evidence-based data used in the document 0 100 No
The document states if and how patients were involved/consulted in its production 0 100 No
Structure data
Use of everyday language, explains complex words or jargon 100 0 Yes
Use of generic names for all medications or products 90 10 Yes
Use of short sentences 100 0 Yes
The document personally addresses the reader 60 40 Yes
The tone is respectful 100 0 Yes
Information is clear 100 0 Yes
Information is balanced between risk and benefits 60 40 Yes
Information is presented in a logical order 100 0 Yes
The design and layout is satisfactory 70 30 Yes
Figures and graphs are clear and relevant 0 100 No
The document has a named space for the reader’s note 0 100 No
The document includes a consent form, contrary to recommendations 0 100 No
Table I. EQIP tool results applied to “Top Surgery” information provided by ChatGPT.
Question Yes (%) No (%) Response
Content data
Initial definition of which subjects will be covered 100 0 Yes
Coverage of the above defined subjects 60 40 Yes
Description of the medical problem 90 10 Yes
Definition of the purpose of the medical intervention 100 0 Yes
Description of the treatment alternatives (including no treatment) 70 30 Yes
Description of the sequence of the medical procedure 90 10 Yes
Description of the qualitative benefits 90 10 Yes
Description of the quantitative benefits 10 90 No
Description of the qualitative risk and side effects 10 90 No
Description of the quantitative risk and side effects 0 100 No
Addressing quality of life issues 100 0 Yes
Description of how potential complication will be dealt with 0 100 No
Description of precautions that the patient may take 80 20 Yes
Mention of the alert signs that the patient may detect 0 100 No
Addressing medical intervention cost and insurance issue 0 100 No
Specific contact details for hospital services 0 100 No
Specific details of other sources of reliable information/support 70 30 Yes
The document covers all relevant issues on the topic 20 80 No
Identification data
Date of issue or revision 0 100 No
Logo of the issuing body 0 100 No
Name of the persons or entities that produced the document 0 100 No
Name of persons or entities that financed the document 0 100 No
Short bibliography of evidence-based data used in the document 0 100 No
The document states if and how patients were involved/consulted in its production 0 100 No
Structure data
Use of everyday language, explains complex words or jargon 100 0 Yes
Use of generic names for all medications or products 90 10 Yes
Use of short sentences 100 0 Yes
The document personally addresses the reader 60 40 Yes
The tone is respectful 100 0 Yes
Information is clear 100 0 Yes
Information is balanced between risk and benefits 60 40 Yes
Information is presented in a logical order 100 0 Yes
The design and layout is satisfactory 70 30 Yes
Figures and graphs are clear and relevant 0 100 No
The document has a named space for the reader’s note 0 100 No
The document includes a consent form, contrary to recommendations 0 100 No
Table II. EQIP tool results applied to “Phalloplasty” information provided by ChatGPT.

References

  1. Bellman R. An introduction to artificial intelligence: can computers think? Thomas Course Technology. Published online 1978.
  2. Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. Published online 2023. doi:https://doi.org/10.1002/ase.2270
  3. Jarvis T, Thornburg D, Rebecca A. Artificial intelligence in plastic surgery: current applications, future directions, and ethical implications. Plast Reconstr Surg Glob Open. 2020;8. doi:https://doi.org/10.1097/GOX.0000000000003200
  4. Liang X, Yang X, Yin S. Artificial intelligence in plastic surgery: applications and challenges. Aesthetic Plast Surg. 2021;45:784-790. doi:https://doi.org/10.1007/s00266-019-01592-2
  5. Sharma S, Ramchandani J, Thakker A. ChatGPT in plastic and reconstructive surgery. Indian J Plast Surg. 2023;56:320-325. doi:https://doi.org/10.1055/s-0043-1771514
  6. Montemurro P, Porcnik A, Hedén P. The influence of social media and easily accessible online information on the aesthetic plastic surgery practice: literature review and our own experience. Aesthetic Plast Surg. 2015;39:270-277. doi:https://doi.org/10.1007/s00266-015-0454-3
  7. Bruno E, Borea G, Valeriani R. Evaluating the quality of online patient information for prepectoral breast reconstruction using polyurethane-coated breast implants. JPRAS Open. 2023;39:11-17. doi:https://doi.org/10.1016/j.jpra.2023.10.015
  8. Moult B, Franck L, Brady H. Ensuring quality information for patients: development and preliminary validation of a new instrument to improve the quality of written health care information. Health Expect. 2004;7:165-175. doi:https://doi.org/10.1111/j.1369-7625.200
  9. Charvet-Berard A, Chopard P, Perneger T. Measuring quality of patient information documents with an expanded EQIP scale. Patient Educ Couns. 2008;70:407-411. doi:https://doi.org/10.1016/j.pec.2007.11.018
  10. Nokoff N. Medical interventions for transgender youth. Endotext. Published online 2022.
  11. Standards of care for the health of transgender and gender diverse people. Published online 2022.
  12. Media Reference Guide.
  13. Budge S, Katz-Wise S, Tebbe E. Transgender emotional and coping processes: implications for therapy. Journal of Counseling Psychology. 2013;60:353-366. doi:https://doi.org/10.1037/a0031774
  14. Coleman E, Bockting W, Botzer M. Standards of care for the health of transsexual, transgender, and gender-nonconforming people, version 7. International Journal of Transgenderism. 2012;13:165-232. doi:https://doi.org/10.1080/15532739.2011.700873
  15. Selvaggi G, Bellringer J. Gender reassignment surgery: an overview. Nat Rev Urol. 2011;8:274-282. doi:https://doi.org/10.1038/nrurol.2011.46
  16. Meier A, Papadopulos N. Lebensqualität nach geschlechtsangleichenden Operationen – eine Übersicht [Quality of life after gender reassignment surgery: an overview]. Handchir Mikrochir Plast Chir. 2021;53:556-563. doi:https://doi.org/10.1055/a-1487-6415
  17. Heston A, Esmonde N, Dugi D. Phalloplasty: techniques and outcomes. Transl Androl Urol. 2019;8:254-265. doi:https://doi.org/10.21037/tau.2019.05.05
  18. Santanelli F, Scuderi N. Neophalloplasty in female-to-male transsexuals with the island tensor fasciae latae flap. Plast Reconstr Surg. 2000;105:1990-1996. doi:https://doi.org/10.1097/00006534-200005000-00012
  19. Ammari T, Sluiter E, Gast K. Female-to-male gender-affirming chest reconstruction surgery. Aesthet Surg J. 2019;39:150-163. doi:https://doi.org/10.1093/asj/sjy098
  20. Ahn C. Exploring ChatGPT for information of cardiopulmonary resuscitation. Resuscitation. 2023;185. doi:https://doi.org/10.1016/j.resuscitation.2023.109729
  21. Najafali D, Camacho J, Galbraith L. Ask and you shall receive: openAI ChatGPT writes us an editorial on using chatbots in gender affirmation surgery and strategies to increase widespread adoption. Aesthet Surg J. 2023;43:NP715-NP717. doi:https://doi.org/10.1093/asj/sjad119
  22. Walker H, Ghani S, Kuemmerli C. Reliability of medical information provided by ChatGPT: assessment against clinical guidelines and patient information quality instrument. J Med Internet Res. 2023;25.
  23. Xie Y, Seth I, Hunter-Smith D. Aesthetic surgery advice and counseling from artificial intelligence: a rhinoplasty consultation with ChatGPT. Aesthetic Plast Surg. 2023;47:1985-1993. doi:https://doi.org/10.1007/s00266-023-03338-7
  24. Grippaudo F, Nigrelli S, Patrignani A. Quality of the information provided by ChatGPT for patients in breast plastic surgery: are we already in the future?. JPRAS Open. 2024;40:99-105. doi:https://doi.org/10.1016/j.jpra.2024.02.001
  25. Leypold T, Schäfer B, Boos A. Can AI think like a plastic surgeon? Evaluating GPT-4’s clinical judgment in reconstructive procedures of the upper extremity. Plast Reconstr Surg Glob Open. 2023;11. doi:https://doi.org/10.1097/GOX.00000000000054
  26. Longo B, Cervelli V. Technological innovation in plastic surgery: the role of software and apps. PRRS. 2024;3:1-2. doi:https://doi.org/10.57604/PRRS-535

Authors

Francesca Romana Grippaudo - Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Rome, Italy

Alice Patrignani - Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Rome, Italy

Viviana Mannella - Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Rome, Italy. Corresponding author - viviana.mannella@uniroma1.it

Laurenza Schiavone - Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Rome, Italy

Diego Ribuffo - Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Rome, Italy

How to Cite
Grippaudo, F.R., Patrignani, A., Mannella, V., Schiavone, L. and Ribuffo, D. 2024. Quality of information provided by artificial intelligence for assigned female at birth patients undergoing gender affirming surgery. Plastic Reconstructive and Regenerative Surgery. (Sep. 2024), 1–7. DOI:https://doi.org/10.57604/PRRS-552.