Do you wonder about the difference between coding and programming? Are you new to these concepts?
If you hear the word kernel, do you think about corn before you think about computing? Do you have maths trauma, believe in the ‘geek’ gene, stare blankly at people who mention operating systems, the command line or bandwidth?
Or maybe you think smartphones are ‘magic’ and that a ‘black box’ is an aviation term?
In short, does technology give you the heebie-jeebies, so you’d rather not think about it?
We need to talk.
This year I started a new job. Like really new. A job my dad doesn’t understand. After almost 10 years working in archives, I came to Australia’s Academic and Research Network (AARNet) with a bag full of 20th century skills like map handling and retrieving paper records from storage.
Before that I worked in a library. Before that I did a PhD, well before Research Data Management was a thing, just as libraries and archives had started digitising collection items. I wouldn’t call myself technical, but after working with digital materials as a curator, I got a little bit of a bug. Later on I heard about ‘Digital Humanities’ and worked on a ‘Digital Treasures’ project, curated some online exhibitions and dealt with some eye-poppingly large archival quality audiovisual files that took days to upload. The potential of digitised collections in research got me really excited.
In 2018, I was lucky to work with the Tinker team, creating the beginning of a digital lab for Humanities, Arts & Social Sciences (HASS) research, and supporting collecting institutions to work more closely with researchers. During that year I learnt the term ‘tech curious’ which described me to a tee.
In early 2019 I started working with the eResearch team at AARNet, with a focus on lifting digital skills in relation to cloud computing and data management in university libraries, the HASS research community and the broader Galleries, Libraries, Archives and Museums (GLAM) sector. The core of my work is to translate technology for this cohort, and bring them along for a very exciting ride on the big data wave.
Over the last decade, research transformation through new technologies, infrastructure, collaboration and skills (eResearch) has been dominated by investments in facilities, large instruments, data-generating equipment, and policy interventions. Cloud-enabled services are now a fundamental part of the knowledge economy and the research sector. For students and academics to develop the data science, technology and computational competencies for working with increasingly rich and complex datasets, they must first understand the underlying enabling technical infrastructures. As the need and appetite for data-driven research increase, so does the requirement to better understand how to use the infrastructure already provided, and to guide and contribute to the kinds of infrastructure yet to be built. But we must start at the beginning, to make sure no-one is left behind.
For the past three years, the eResearch team at AARNet has been working with the library community to explain and explore what research infrastructure is and does. We have shown people how to run speed and ping tests, describing this new knowledge as a kind of ‘infrastructure literacy’. We have demonstrated how network speed and low latency, together with a range of data movement tools and techniques, enable researchers to move data more effectively by sharing several ‘research data movement challenge’ scenarios. We have shown our GLAM colleagues what Jupyter notebooks are, and why they are useful, in workshops for newbies.
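For anyone curious what a latency test actually involves, it can be done with nothing more than a standard library. The sketch below is a simplified stand-in for a real ping or speed test (the function name `tcp_latency_ms` is my own invention for illustration): it times a TCP handshake to estimate round-trip time to a server.

```python
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Rough round-trip latency estimate: time one TCP handshake."""
    start = time.perf_counter()
    # create_connection performs the full TCP handshake before returning
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed * 1000  # convert seconds to milliseconds
```

A real `ping` uses ICMP (which often needs elevated privileges), and a proper speed test also measures throughput, but timing a TCP connect is an accessible first taste of what ‘low latency’ means in practice.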
As specialists with experience working in research, research infrastructure, digital curation, archives and libraries, we are translating technology through Library Carpentry to help reduce the barriers to uptake. Many of those barriers are technical, e.g. ‘Why use this tool, and when?’, but they are also linguistic, e.g. ‘Where did that name come from? Why is this function called that?’. It has proved useful to talk about the words used in infrastructure, and how we can make them mean something to us. For example, when describing a term used in relation to Jupyter notebooks, such as ‘kernel’, we need to acknowledge that it is a word used in computing that is also used in other ways, with other meanings. By connecting it to the familiar meaning of a kernel as a ‘core’ or a ‘seed’, we can make its use as a computing term clearer. By taking the time to think and talk about the language barriers in computing for newbies, we aim to increase their ability to adopt new tools and methods. Jargon busting is front and centre.
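To make the ‘kernel as core’ metaphor concrete, here is a toy sketch (my own illustration, not the real Jupyter messaging protocol): the notebook front end hands each cell’s code to the kernel, the core process that actually runs it and keeps the state alive between cells.

```python
# Toy illustration of the kernel idea (NOT the real Jupyter protocol):
# the notebook interface sends cell source to a kernel, the 'core'
# process that executes it and holds state between cells.
def toy_kernel(cell_source: str, namespace: dict) -> None:
    """Execute one 'cell' of code inside a shared namespace."""
    exec(cell_source, namespace)

ns = {}                            # state the kernel carries between cells
toy_kernel("x = 40 + 2", ns)       # cell 1 defines x
toy_kernel("result = x * 2", ns)   # cell 2 still sees x from cell 1
```

This is why restarting a kernel in a notebook wipes your variables: the shared namespace (the seed everything grows from) is thrown away and a fresh one started.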
Our workshops focus on the acquisition of capabilities over skills. What does this mean? We create a safe, relaxed environment in which the instructor is not the expert but a facilitator. The aim of the experience is to empower those attending with the kind of confidence and curiosity that inspires learning beyond the few hours of the workshop. We are helping researchers and research support staff who do not have computer or data science backgrounds to tackle some of the foundation skills of these disciplines so that they can apply them to their work. More importantly, we are aiming to encourage them to change their views on who has permission to teach these skills (modelling a ‘two minutes in front’ kind of teaching) so that they too can take on the role of facilitator after attending one of our courses.
Since March this year I have been developing and refining workshop materials created by our team, learning how to use GitHub, grappling with a little bit of coding and feeling that uncomfortable feeling of being perpetually out of my comfort zone. That ‘comfortable edge’ (thanks, yoga instructor from PhD years!) is where the good stuff happens though. That’s what drives me, and what I try to demonstrate every time I work with a new group who have put their hands up to step into that discomfort.
I love creating that safe place for them, where there’s no judgement, no stupid question or ‘tech shaming’ (this was new to me, too). We can make it fun and learn at the same time, and we can support each other when we feel like giving up because it gets hard. Finding that growth mindset and encouraging inner voice is key, and re-naming previous ‘failures’ as ‘setbacks’ is a fundamental part of tackling this challenge.
I am so fortunate to be able to be there when people start to rewrite the script for themselves, when they see that ‘I’m not a computer person’ isn’t really a thing anymore, and they can break down those internal barriers to see what they can achieve when they give themselves a chance. I hope you will join us!
We are part of a growing community of educators and trainers supporting and enabling best practices in research, computation, and digital tools so please contact us if you would like to join in. If you are keen to participate in a Jupyter Notebooks skill-building and sharing community, we are keen to talk to you. We are looking at different ways to connect with people from all kinds of backgrounds and interests in data driven research, eResearch support and everything in between. To get involved, go to aarnet.edu.au/communities/research.
PS: The First International Conference on Education and Outreach in Data Science and High Performance Computing is happening in Perth in October 2020. If you’d like to know more about the activity in this sector, pop it in your diary and come along!
Author Bio: Sara King is an eResearch Analyst with Australia’s academic and research network provider, AARNet.