Part I: Reflecting upon the question “Do you have to know how to code?”

This seems to be an intense discussion within the field. So intense that Ramsay says that some of his closest friends in the community are “sick to the teeth of this endless meta-discussion” (Ramsay, 2013). Stephen Ramsay is a teacher who teaches others to code and has devoted his whole life to this, so I can understand his strong opinion on the importance of this skill.

Ramsay affirms that programming skill is mandatory and argues that if you are not building things, you are not doing digital humanities. Later, in “On Building”, he says that “All the technai of Digital Humanities… involve building; only a few of them require programming, per se” (Ramsay, 2013). The fact is that it is possible to build things even if you do not code. There are many tools at our disposal nowadays that allow us to build, explore, analyse and visualise without coding, as Ramsay himself mentions.

It is an important and valid discussion, though. As digital scholars, it is important to understand what coding is about, how far it can take you, which questions it may help you answer and which problems it could help you solve. But could stating that one needs to learn to code in order to be a digital humanist drive away people who do not yet have this skill, but who could add a great deal to the field and perhaps learn some code along the way? Should coding not be treated as a means to an end rather than as the end itself?

In their research, James O’Sullivan, Diane Jakacki and Mary Galvin show that there is a generational component behind the divergence of opinions we see in this discussion. They found that, among participants who are all actively engaged in digital scholarship, those over 50 tend to do the coding themselves, while younger scholars tend to work collaboratively or have someone else do the coding for their projects (O’Sullivan, Jakacki and Galvin, 2015).

This study raises a lot of interesting questions and makes visible the shift the field is experiencing. While younger scholars say they are not technically proficient, the 50+ age group consider themselves “technologically self-sufficient”. Is the field getting more attention and attracting scholars from different backgrounds? Are younger scholars making the most of the technology available? The fact is that we all know where the “‘traditional’, more isolated approach to research” brought us; where the “appetite for collaboration” will lead us is still unknown. O’Sullivan, Jakacki and Galvin answer the question with confidence: “You do not ‘have’ to code, as long as you can work—effectively—with someone who does.” (O’Sullivan, Jakacki and Galvin, 2015).

Knowing how to code has countless advantages, and I agree with Ramsay when he says that we should learn to code, just as, in my opinion, we should learn any other tool that can help us reach our goals. In the dynamic world we live in, new tools, technologies and resources are created daily, and they demand time before we develop familiarity with them and start getting something out of them. Sometimes the path you choose does not present the opportunity or the need to learn to code, and I would say that more important than the ability itself is to be open to it and not be afraid of it. At every opportunity we should learn a little bit more, because this helps us better understand what we are doing or because, as Ramsay said, “it’s fun” (Ramsay, 2013).

References

O’Sullivan, J., Jakacki, D. and Galvin, M. (2015) ‘Programming in the Digital Humanities’, Digital Scholarship in the Humanities, 30(suppl_1), pp. i142–i147. doi: 10.1093/llc/fqv042.

Ramsay, S. (2013) ‘On Building’, in Defining Digital Humanities. Routledge, pp. 259–262. doi: 10.4324/9781315576251-21.

Ramsay, S. (2013) ‘Who’s In and Who’s Out’, in Defining Digital Humanities. Routledge, pp. 255–258. doi: 10.4324/9781315576251-20.

Part II: Critical assessment of one peer-reviewed publication or project which utilises computer-assisted techniques.

The publication chosen for this assessment is ‘Evaluation of e-commerce websites accessibility and usability: an e-commerce platform analysis with the inclusion of blind users’ (Gonçalves et al., 2018). The study sets out to answer the following questions:

  • Can we achieve accessibility by following the guidelines provided by the W3C?
  • Is it possible to develop accessible e-commerce websites for blind users by following the W3C guidelines?
  • Is the chosen case study e-commerce website accessible and useful for blind users?

These are important and valid questions, as the Web Content Accessibility Guidelines (WCAG), created by the W3C, are “recognized as the international standard for Web accessibility” (Gonçalves et al., 2018). Also, e-commerce websites are more complex than other types of websites, and the question of whether the WCAG also apply to them is relevant to the whole industry, to people working to make the Web a more accessible place for all, and to people with disabilities.

To answer these questions, both an automatic and a manual analysis of the chosen e-commerce website were performed. The whole study comprised three evaluation stages:

1. Accessibility:
   1. Automatic evaluation using SortSite, a Web accessibility and usability tool.
   2. Manual evaluation based on the accessibility report.
2. Usability – heuristic evaluation by three usability specialists.
3. User tests with people who are blind.

In the first stage, the website’s accessibility was analysed by verifying its level of conformance with the WCAG. Once they had the results from SortSite, a specialist verified the website manually against each guideline. Even though some errors were visible only in the code and were therefore easier to identify with the tool, two problems were detected in the SortSite report at this stage: false positives (correct content flagged as an error) and false negatives (actual errors that went undetected). This confirmed their expectation that a tool such as SortSite cannot assess a website the way a human user does, and for this reason it should be used as an initial step (Gonçalves et al., 2018).

Evaluation tools can be really helpful, as they ease the task of reading and evaluating the code, but it is important to understand exactly what these systems can offer and which questions they can help us answer. Tools like SortSite perform an automatic evaluation of a page’s HTML once its URL is provided. The tool “sweeps” the website, checking the guidelines of one or more standards, for example WCAG 1.0 and 2.0 or Section 508. The final result is a report containing what Gonçalves et al. (2018) called “deviations”, as well as suggestions on how to correct them.
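
As an illustration of the principle (not of how SortSite itself is implemented), a minimal automated check could look like the sketch below: it fetches a page, parses the HTML, and records two common kinds of deviation, images without alternative text and form fields without an accessible name. The URL is a placeholder.

```python
# Minimal sketch of an automated accessibility check in the spirit of tools
# like SortSite or WAVE. It covers only two WCAG-related rules; real tools
# check many more and their output still needs human review.
import requests
from bs4 import BeautifulSoup

def check_page(url):
    """Fetch a page and return a list of (rule, snippet) deviations."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    deviations = []

    # WCAG 1.1.1: non-text content needs a text alternative.
    for img in soup.find_all("img"):
        if not img.get("alt"):  # missing or empty alt attribute
            deviations.append(("missing-alt", str(img)[:80]))

    # WCAG 1.3.1 / 4.1.2: form controls need an accessible name.
    labelled_ids = {lab.get("for") for lab in soup.find_all("label") if lab.get("for")}
    for field in soup.find_all(["input", "select", "textarea"]):
        if field.get("type") in ("hidden", "submit", "button"):
            continue
        has_label = field.get("id") in labelled_ids
        has_aria = field.get("aria-label") or field.get("aria-labelledby")
        if not (has_label or has_aria):
            deviations.append(("unlabelled-field", str(field)[:80]))

    return deviations

if __name__ == "__main__":
    for rule, snippet in check_page("https://example.com"):  # placeholder URL
        print(f"{rule}: {snippet}")
```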

Although the SortSite tool scans all the pages of the website, in the manual evaluation they analysed only the pages with which users have to interact to make an online purchase. The results showed that all pages contained “non-conformities with the WCAG 2.0” (Gonçalves et al., 2018) and they noticed that the errors found “posed a serious barrier making user interaction more difficult” (Gonçalves et al., 2018).

In the second part of the assessment, three usability experts performed a heuristic evaluation of the website. The table used for the analysis had 15 heuristics with their respective sub-heuristics, a total of 68 items. Of the 15 heuristics inspected, 9 presented problems; of the 68 sub-heuristic items, 52 (about 76%) did not have any kind of problem and 16 (about 24%) had some type of problem. Of those, four sub-heuristic problems were classified with grade 1 (only a cosmetic problem), four with grade 2 (minor usability problem), three with grade 3 (major usability problem), and four with grade 4 (usability catastrophe).

The third and final part of the study was carried out as a qualitative case study applied directly to 20 users: women and men aged between 18 and 57, all screen reader users who considered themselves experienced with computers and the Internet. To collect the data, they used “logbooks, document analysis, questionnaires, interviews, and direct observation along with the think-aloud technique” (Gonçalves et al., 2018).

At this stage the users were asked to perform six tasks, ranging from identifying the website to selecting and purchasing a product. Users had problems identifying the section where the social networks are located and finalising the purchase, the latter experienced by 13 of the 20 users. These results show that the main purpose of an e-commerce website was not achieved by 65% of the users. The users faced many challenges and even felt lost while performing some of the tasks.

The accessibility problems found in the first assessment and highlighted in the heuristic analysis were observed again during the user tests. For example, the lack of tags and attributes in the code became evident when users navigated through these elements and did not receive appropriate feedback from the screen reader. Although most of the tasks were completed, the most relevant one, considering the nature of the website, was not. The authors therefore concluded that the website was not accessible to screen reader users and that all of the problems encountered would have been resolved if the WCAG 2.0 guidelines had been followed.

This study shows the end-to-end process of validating a website’s accessibility and usability. The combination of all the methods they used provided a clear and effective understanding of the issues, suggestions on how to fix them, and feedback from the users on how they expect a website like this to behave. It reminds us of the importance of following the WCAG whenever possible and of using appropriate methodology and resources when evaluating a website’s accessibility and usability.

References

Gonçalves, R. et al. (2018) ‘Evaluation of e-commerce websites accessibility and usability: an e-commerce platform analysis with the inclusion of blind users’, Universal Access in the Information Society, 17(3), pp. 567–583. doi: 10.1007/s10209-017-0557-5.

Part III: Can machines replace humans in determining the accessibility of a webpage?

The computers at our disposal nowadays were unimaginable a couple of decades ago. As technology evolves, we are still finding new ways in which computers can enhance our analytical capabilities. As we better understand which questions computers can help us answer, tools and resources are created that enable new ways of analysing data, text and code, of telling a story or simply of seeing a map.

With these advanced capabilities, technology is also a means to accessibility. However, when things are poorly designed, the result can directly affect the lives of people with disabilities who use a particular system and, consequently, exclude them. We have the means to make all web pages accessible, but for many reasons this is not the reality right now.

To help companies make their websites more accessible, many tools have been created to aid in the task of identifying accessibility issues by scanning the webpage’s source code from the URL provided. These tools are not 100% accurate, nor can their output be taken as a final determination of a webpage’s accessibility, but they are certainly a great help in analysing the code, which becomes easier with the tags the tool adds.

In the paper analysed in Part II of this assessment, a tool called SortSite was used to scan a website and provide an accessibility report. Once the automated check was done, a specialist analysed the pages and found that the SortSite report contained false positives and false negatives, confirming their initial consideration that “automatic tools are effective in the identification of accessibility errors; however, they do not have the same ability to assess the accessibility of a website that a human user has” (Gonçalves et al., 2018).
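
To make the distinction concrete, here is a small sketch with invented issue identifiers: treating the tool’s flagged issues and the specialist’s confirmed issues as two sets, the false positives and false negatives fall out of simple set differences.

```python
# Hypothetical illustration of how a specialist's manual review relates to an
# automated report. The issue identifiers are invented for the example.
tool_flagged = {"img-17-no-alt", "link-3-empty", "table-2-no-headers"}
human_confirmed = {"img-17-no-alt", "table-2-no-headers", "menu-1-keyboard-trap"}

false_positives = tool_flagged - human_confirmed   # flagged, but not a real issue
false_negatives = human_confirmed - tool_flagged   # real issue the tool missed
true_positives = tool_flagged & human_confirmed    # agreed on by both

print("False positives:", false_positives)
print("False negatives:", false_negatives)
print("True positives:", true_positives)
```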

Tools like SortSite require human intervention for validation and also to give appropriate meaning to the data provided. This reminds me of Rockwell and Sinclair, who say that “computers do not read meaning in a string. They process a sequence of characters.” (Rockwell and Sinclair, 2016). Computers do not analyse text or code as humans do. They cannot understand the text for us, but they surely can facilitate data, text and code analysis.

To better understand how a tool like SortSite works, I performed a few tests using a tool called WAVE. Rockwell and Sinclair summarise that “Algorithms automate tasks through formal description of discrete steps” (2016). WAVE’s algorithms identify the elements associated with accessibility, for example alt texts, labels, hyperlinks, headings and contrast, and then verify their values, flagging anything that differs from what is expected.

Two reports were created, one checking Twitter’s accessibility and the other Facebook’s. The report provided by WAVE has five categories in total. Errors are failures to meet requirements of the Web Content Accessibility Guidelines (WCAG) that will impact certain users. Contrast Errors are texts that do not meet the WCAG contrast requirements. Alerts indicate elements that may cause accessibility issues. Features are elements that improve accessibility when implemented correctly, and ARIA presents accessibility information provided for people with disabilities.
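
To illustrate the kind of check behind the Contrast Errors category, the sketch below applies the WCAG 2.0 relative-luminance formula and compares the resulting ratio with the 4.5:1 threshold required for normal text at level AA. The colour pair is an arbitrary example of my own, not something taken from the WAVE reports.

```python
# Contrast check following the WCAG 2.0 relative-luminance formula.

def relative_luminance(hex_colour):
    """Relative luminance of an sRGB colour given as '#rrggbb'."""
    rgb = [int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in rgb]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(foreground, background):
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    l1 = relative_luminance(foreground)
    l2 = relative_luminance(background)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#777777", "#ffffff")          # grey text on white (example)
print(f"Contrast ratio: {ratio:.2f}:1")
print("Passes WCAG AA (normal text)?", ratio >= 4.5)  # AA requires at least 4.5:1
```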

Although the Errors and Contrast Errors are very likely to be real problems for users with disabilities, they need to be validated by a human, as the report may contain false positives and false negatives. Alerts point to elements that may cause accessibility issues, whose real impact must be evaluated by a specialist, and each ARIA item found should be verified carefully, because ARIA used incorrectly actually reduces accessibility (WAVE Help, no date).

Twitter – Analysis with WAVE (visualization)

While Twitter presented 4 errors, Facebook had 11. Facebook, a social network heavily based on images, had 16 “null” or empty alternative texts, while on Twitter 14 were found. On Facebook there were 10 contrast errors, and on Twitter this number was 11. On the other hand, while 10 alerts were reported for Facebook, for Twitter this number was 300% higher. I excluded ARIA from the visualization because quantifying it says little, since each ARIA item must be reviewed by a specialist. This shows that WAVE found more accessibility issues on Facebook than on Twitter, although the latter has more alerts than Facebook.
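
As a side note, the comparison chart could be rebuilt from the figures quoted above with a short script like the sketch below; it is my own reconstruction, limited to the counts stated explicitly (errors, contrast errors, empty alternative text), since the exact number of alerts for Twitter is not given.

```python
# Rebuilds the Twitter vs Facebook comparison from the counts quoted in the
# text; alerts and ARIA are left out because no exact Twitter count is given.
import matplotlib.pyplot as plt
import numpy as np

categories = ["Errors", "Contrast errors", "Empty alt text"]
twitter = [4, 11, 14]
facebook = [11, 10, 16]

x = np.arange(len(categories))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, twitter, width, label="Twitter")
ax.bar(x + width / 2, facebook, width, label="Facebook")
ax.set_xticks(x)
ax.set_xticklabels(categories)
ax.set_ylabel("Count reported by WAVE")
ax.set_title("WAVE report comparison")
ax.legend()
plt.tight_layout()
plt.show()
```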

It is important to say that these numbers are based on my own timeline, which was being displayed at the moment I generated the reports, and the analysis is therefore not reproducible.

Facebook – Analysis with WAVE (visualization)

I compared the results from WAVE with Lighthouse, the open-source automated tool built into Google Chrome, to see whether they would point in the same direction, but the result was the opposite: Twitter scored 73 for accessibility while Facebook scored 79.
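
For readers who want to repeat this comparison, here is a sketch of how it could be scripted, assuming the Lighthouse command-line tool is installed (for example via npm) and Chrome is available; the URLs and output file names are placeholders.

```python
# Sketch of scripting the Lighthouse accessibility comparison. Assumes the
# Lighthouse CLI is installed (e.g. `npm install -g lighthouse`). Lighthouse
# reports category scores on a 0-1 scale, so they are multiplied by 100.
import json
import subprocess

def accessibility_score(url, outfile):
    subprocess.run(
        ["lighthouse", url,
         "--only-categories=accessibility",
         "--output=json", f"--output-path={outfile}",
         "--chrome-flags=--headless", "--quiet"],
        check=True,
    )
    with open(outfile) as fh:
        report = json.load(fh)
    return round(report["categories"]["accessibility"]["score"] * 100)

for site, url in [("Twitter", "https://twitter.com"),
                  ("Facebook", "https://www.facebook.com")]:
    print(site, accessibility_score(url, f"{site.lower()}-lighthouse.json"))
```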

Twitter – Analysis with Lighthouse

Facebook – Analysis with Lighthouse

This reinforces the need for human intervention when analysing the accessibility of a website. Referring to a thought experiment by John Searle from 1980, Geoffrey Rockwell and Stefan Sinclair remind us that computers do things that seem to be based on understanding, but are not understanding as humans experience it. Although new tools keep being created, the need for human intelligence to analyse the data and give it proper meaning is unavoidable.

“We will still develop interpretive tools—hermeneutica—that can augment and extend our reading, not replace us” (Rockwell and Sinclair, 2016).

References

Gonçalves, R. et al. (2018) ‘Evaluation of e-commerce websites accessibility and usability: an e-commerce platform analysis with the inclusion of blind users’, Universal Access in the Information Society, 17(3), pp. 567–583. doi: 10.1007/s10209-017-0557-5.

Rockwell, G. and Sinclair, S. (2016) Hermeneutica: Computer-Assisted Interpretation in the Humanities. MIT Press.

WAVE Help (no date). Available at: https://wave.webaim.org/help (Accessed: 6 April 2021).

The database generated from the WAVE reports can be found at the following link: Facebook/Twitter analysis.