Part I: Reflecting upon the question “Do you have to know how to code?”

This seems to be an intense discussion within the field. So intense that Ramsay says that some of his closest friends in the community are “sick to the teeth of this endless meta-discussion” (Ramsay, 2013). Stephen Ramsay is a teacher who teaches others to code and has devoted his whole life to it, so I can understand his strong opinion on the importance of this skill.

Ramsay affirms that programming skill is mandatory and argues that if you are not building things, you are not doing digital humanities. Later, in ‘On Building’, he says that “All the technai of Digital Humanities… involve building; only a few of them require programming, per se” (Ramsay, 2013). The fact is that it is possible to build things even if you do not code. There are many tools at our disposal nowadays that allow us to build, explore, analyse and visualise without coding, as Ramsay himself acknowledges.

It is an important and valid discussion, though. As digital scholars, we need to understand what coding is about, how far it can take us, which questions it may help us answer and which problems it could help us solve. But does stating that one needs to learn to code to be a digital humanist not push away people who lack this skill but could contribute a great deal to the field, and perhaps learn some code along the way? Should coding not be treated as a means to an end rather than the end itself?

In their research, James O’Sullivan, Diane Jakacki and Mary Galvin show that there is a generational component behind the divergence of opinions in this discussion. They found that, among participants (all actively engaged in digital scholarship), those over 50 tend to do the coding themselves, while younger scholars tend to work collaboratively or have someone else do the coding for their projects (O’Sullivan, Jakacki and Galvin, 2015).

This study raises a lot of interesting questions and makes visible the shift the field is experiencing. While younger scholars say they are not technically proficient, the 50+ age group consider themselves “technologically self-sufficient”. Is the field attracting more attention and inviting scholars with different backgrounds? Are younger scholars making the most of technology? The fact is that we all know where the “‘traditional’, more isolated approach to research” brought us; where the “appetite for collaboration” will lead us is still unknown. O’Sullivan, Jakacki and Galvin answer the question with confidence: “You do not ‘have’ to code, as long as you can work—effectively—with someone who does.” (O’Sullivan, Jakacki and Galvin, 2015).

Knowing how to code has countless advantages, and I agree with Ramsay when he says that we should learn to code, just as, in my opinion, we should learn any other tool that can help us reach our goals. In the dynamic world we live in, new tools, technologies and resources are created daily, and they demand time before we become familiar with them and start getting something out of them. Sometimes the path you choose never brings the opportunity or the need to learn to code, and I would say that more important than the ability itself is being open to it and not being afraid of it. At every opportunity we should learn a little more, because this will help us better understand what we are doing or simply because, as Ramsay said, “it’s fun” (Ramsay, 2013).

References

O’Sullivan, J., Jakacki, D. and Galvin, M. (2015) ‘Programming in the Digital Humanities’, Digital Scholarship in the Humanities, 30(suppl_1), pp. i142–i147. doi: 10.1093/llc/fqv042.

Ramsay, S. (2013) ‘On Building’, in Defining Digital Humanities. Routledge, pp. 259–262. doi: 10.4324/9781315576251-21.

Ramsay, S. (2013) ‘Who’s In and Who’s Out’, in Defining Digital Humanities. Routledge, pp. 255–258. doi: 10.4324/9781315576251-20.

Part II: Critical assessment of one peer-reviewed publication or project which utilises computer-assisted techniques.

References

Gonçalves, R. et al. (2018) ‘Evaluation of e-commerce websites accessibility and usability: an e-commerce platform analysis with the inclusion of blind users’, Universal Access in the Information Society, 17(3), pp. 567–583. doi: 10.1007/s10209-017-0557-5.

Part III: Can machines replace humans in determining the accessibility of a webpage? 

The computers at our disposal nowadays were unimaginable until a couple of decades ago. As technology evolves, we keep finding new ways in which computers can enhance our analytical capabilities. As we better understand which questions computers can help us answer, tools and resources are created that enable new ways of analysing data, text and code, telling a story or simply viewing a map.

With its advanced capabilities, technology is also a means to accessibility. However, when systems are poorly designed, the result can directly affect the lives of people with disabilities who use them and consequently exclude them. We have the means to make all web pages accessible, but for many reasons this is not yet the reality.

To help companies make their websites more accessible, many tools have been created that identify accessibility issues by scanning the source code of the page at a given URL. These tools are not 100% accurate, nor can their output be taken as a final determination of a page’s accessibility, but they are certainly a great help: the flags they add make analysing the code much easier.
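To give a concrete idea of what such a scan does, here is a minimal sketch in Python of one of the most common checks these tools automate: flagging images whose alternative text is missing or empty. It assumes the requests and BeautifulSoup libraries, and the URL is only an example; real tools such as SortSite or WAVE evaluate many more rules than this.

```python
# Minimal sketch of an automated accessibility check: fetch a page and flag
# <img> elements with missing or empty alt text (WCAG 1.1.1).
# Assumes the requests and beautifulsoup4 packages are installed.
import requests
from bs4 import BeautifulSoup

def find_missing_alt(url: str) -> list[str]:
    """Return a description of every <img> without a usable alt attribute."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        alt = img.get("alt")
        if alt is None or not alt.strip():
            issues.append(f"Missing or empty alt text: {img.get('src', '<no src>')}")
    return issues

if __name__ == "__main__":
    # Example URL only; replace with the page to be audited.
    for issue in find_missing_alt("https://example.com"):
        print(issue)
```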

In the paper analysed in Part II of this assessment, a tool called SortSite was used to scan a website and provide an accessibility report. Once the automated check was done, a specialist analysed the pages and found that SortSite’s report contained false positives and false negatives, confirming the authors’ initial assumption that “automatic tools are effective in the identification of accessibility errors; however, they do not have the same ability to assess the accessibility of a website that a human user has” (Gonçalves et al., 2018).

Tools like SortSite require human intervention, both to validate the results and to give appropriate meaning to the data provided. This reminds me of Rockwell and Sinclair, who say that “computers do not read meaning in a string. They process a sequence of characters.” (Rockwell and Sinclair, 2016). Computers do not analyse text or code as humans do. They cannot understand the text for us, but they surely can facilitate the analysis of data, text and code.

To better understand how a tool like SortSite works, I performed a few tests using a similar tool called WAVE. Rockwell and Sinclair observe that “Algorithms automate tasks through formal description of discrete steps” (2016). WAVE’s algorithms identify the elements associated with accessibility, for example alt texts, labels, hyperlinks, headings and contrast, and then check their values, flagging anything that differs from what is expected.
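Contrast is a good example of a check that can be fully automated, because WCAG defines it as a precise formula: the ratio between the relative luminances of the lighter and darker colour must be at least 4.5:1 for normal text. The sketch below implements that formula; the colour values at the end are purely illustrative.

```python
# WCAG 2.x contrast check: compute the relative luminance of each colour,
# then the ratio (L_lighter + 0.05) / (L_darker + 0.05); normal text needs >= 4.5.
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Illustrative values: light grey text on a white background fails WCAG AA.
ratio = contrast_ratio((150, 150, 150), (255, 255, 255))
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} for normal text")
```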

Two reports were created, one checking Twitter’s accessibility and the other Facebook’s. There are five categories in total in the report provided by WAVE. Errors are failures to meet requirements of the Web Content Accessibility Guidelines (WCAG) and will impact certain users. Contrast errors are texts that do not meet WCAG contrast requirements. Alerts indicate elements that may cause accessibility issues. Features are elements that improve accessibility when implemented correctly, and ARIA items present accessibility information for people with disabilities.

Although Errors and Contrast errors are very likely to be real problems for users with disabilities, they need to be validated by a human, as the report may contain false positives and false negatives. Alerts may indicate accessibility issues whose real impact must be evaluated by a specialist, and each ARIA item found should be verified carefully, because ARIA used incorrectly actually reduces accessibility (WAVE Help, no date).
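WAVE also offers an API for automated runs, which I did not use here. Assuming a report exported as a simple count per category (the field names and numbers below are hypothetical, not WAVE’s actual export format), a summary that separates the likely real issues from the items that need a specialist’s review could look like this:

```python
# Hypothetical summary of a WAVE-style report. The dictionary mirrors the
# five categories described above; structure and numbers are illustrative only.
report = {
    "errors": 11,
    "contrast_errors": 10,
    "alerts": 10,
    "features": 50,
    "aria": 120,
}

LIKELY_REAL = {"errors", "contrast_errors"}   # probably genuine barriers
NEEDS_REVIEW = {"alerts", "aria"}             # impact depends on context

for category, count in report.items():
    if category in LIKELY_REAL:
        note = "likely real issue, still verify for false positives"
    elif category in NEEDS_REVIEW:
        note = "must be evaluated by a specialist"
    else:
        note = "improves accessibility if implemented correctly"
    print(f"{category:>16}: {count:>4}  ({note})")
```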

Twitter – Analysis with WAVE.

While Twitter presented 4 errors, Facebook had 11. Facebook, a social media platform built around images, had 16 “null” or empty alternative texts, while 14 were found on Twitter. On Facebook there were 10 contrast errors, and on Twitter this number was 11. On the other hand, while 10 alerts were reported for Facebook, for Twitter this number was 300% higher. I excluded ARIA from the visualization because counting these items says little, since each of them must be reviewed by a specialist. This shows that WAVE found more accessibility issues on Facebook than on Twitter, although the latter has more alerts than Facebook.

It is important to say that these numbers are based on my own timeline as it was displayed at the moment I generated the reports, and they are therefore not reproducible.
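For reference, a comparison chart like the ones linked here can be reproduced with a few lines of matplotlib, using only the counts quoted above (alerts and ARIA omitted for the reasons already given):

```python
# Minimal sketch of the WAVE comparison chart, using only the counts
# quoted in the text above (alerts and ARIA omitted).
import matplotlib.pyplot as plt

categories = ["Errors", "Contrast errors", "Empty alt text"]
twitter = [4, 11, 14]
facebook = [11, 10, 16]

x = range(len(categories))
width = 0.35
plt.bar([i - width / 2 for i in x], twitter, width, label="Twitter")
plt.bar([i + width / 2 for i in x], facebook, width, label="Facebook")
plt.xticks(list(x), categories)
plt.ylabel("Issues reported by WAVE")
plt.legend()
plt.tight_layout()
plt.savefig("wave_comparison.png")
```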

Facebook – Analysis with WAVE.

I compared the results from WAVE with Lighthouse, Google Chrome’s open-source automated auditing tool, to see whether they would point in the same direction, but the opposite happened: Twitter scored 73 for accessibility while Facebook scored 79.
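Lighthouse ships as a Node command-line tool, so the audit can also be scripted rather than run from the browser. The sketch below assumes the lighthouse CLI (and Chrome) is installed and that the JSON report exposes the score under categories.accessibility.score as a value between 0 and 1; scores will of course vary between runs, just as the WAVE numbers did.

```python
# Sketch of scripting the Lighthouse accessibility audit from Python,
# assuming the Lighthouse CLI (a Node package) is installed and on PATH.
import json
import subprocess

def accessibility_score(url: str) -> int:
    """Run Lighthouse's accessibility category and return the 0-100 score."""
    result = subprocess.run(
        [
            "lighthouse", url,
            "--only-categories=accessibility",
            "--output=json",
            "--output-path=stdout",
            "--quiet",
            "--chrome-flags=--headless",
        ],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return round(report["categories"]["accessibility"]["score"] * 100)

for site in ("https://twitter.com", "https://www.facebook.com"):
    print(site, accessibility_score(site))
```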

Twitter – Analysis with Lighthouse

Facebook – Analysis with Lighthouse

This reinforces the need for human intervention when analysing the accessibility of a website. Mentioning a thought experiment proposed by John Searle in 1980, Geoffrey Rockwell and Stefan Sinclair remind us that computers do things that seem to be based on understanding, but are not understanding as humans experience it. Although new tools keep being created, the need for human intelligence to analyse the data and give it proper meaning is unavoidable.

“We will still develop interpretive tools—hermeneutica—that can augment and extend our reading, not replace us” (Rockwell and Sinclair, 2016)

References

Gonçalves, R. et al. (2018) ‘Evaluation of e-commerce websites accessibility and usability: an e-commerce platform analysis with the inclusion of blind users’, Universal Access in the Information Society, 17(3), pp. 567–583. doi: 10.1007/s10209-017-0557-5.

Rockwell, G. and Sinclair, S. (2016) Hermeneutica: Computer-Assisted Interpretation in the Humanities. MIT Press.

WAVE Help (no date). Available at: https://wave.webaim.org/help (Accessed: 6 April 2021).
The database generated from the WAVE reports can be found at the following link: Facebook/ Twitter analysis