Module 10: The Past, Present, and Future of DH

Perhaps because it is still an emerging field, or perhaps because it is interdisciplinary in nature, a thread I found in this week's readings was that digital humanities is being critically analyzed more than I have seen for any other specific field. Conversations about intersectionality, diversity, and graduate work are present in a general sense across academia, but this week's articles specifically target their role and importance in DH. Of course, a common theme this week and throughout the semester has been "What is DH?", so perhaps these pointed questions are merely growing pains for a field emerging in the midst of a reevaluation of higher education. Without a formal definition, scholars are still working to set the boundaries of DH and establish its tenets and format; being such a malleable subject makes it a prime field for these critical conversations. I believe that DH has the chance to revitalize and revolutionize the humanities, and witnessing these conversations only reinforces that belief. If we can establish a diverse, critical field that honors everyone's work, from scholars to tenured faculty to those outside of academia, maybe we can redefine the value of our work and address some of the larger issues in higher education.

In particular, and admittedly with some bias, I have considered the potential effects of DH on the field of history. I spoke a bit about the loneliness of history in the Slack discussion, and it is something that has been on my mind all semester. Part of why I have loved this class has been the Slack discussion. Normally, a seminar meets once a week for a few hours; we all read separately, come up with our handful of ideas and reflections, hope that we can incorporate them into the weekly discussion, and then move past the topic. Any collaboration or discussion outside of that class requires meeting with peers at another time, which can be difficult given our variety of schedules and commitments. With Slack, though, we are not bound to a specific timeframe. Any of us can post at any point during the week and garner a response at any later time, and the discussions are not limited to a single thread at a time or cut off when the clock runs out. I believe Slack has allowed us to flesh out our ideas better, reflect on our responses, and share resources in a greater capacity than would ever be possible in a regular seminar.

But beyond one simple platform, I believe that DH can change the way historians work in many realms. As the field currently stands, scholarship is skewed toward the privileged; those in tenure-track positions are supported in their journey to produce new scholarship and have greater opportunities to share their work. Adjunct faculty, graduate students, and scholars outside of the university are in a much more difficult position; many lack institutional support, and their scholarship is viewed differently because of their position within the academic hierarchy. Especially in history, which prizes monographs built on a breadth of original archival sources, producing scholarship can be incredibly difficult without a network of support. But what if we shifted the field toward more collaborative work, introducing teams of scholars who combine their various skills and interests to create more robust and accessible projects?

Digital history projects could be the path to a more collaborative field. Just reflecting on this class, we each became interested in one or more of the variety of topics we encountered: mapping, network analysis, programming, accessibility, online exhibits, and so on. Each of these topics is only a small part of DH, but each is incredibly useful and important to any potential project. We are already expected to learn a vast amount of methodology within our field; are we also expected to master the plethora of skills available to us through DH? I propose that instead we create a community of scholars who combine their specific interpretations and lenses with those of the people around them. Rather than focusing on the minutiae of detail and framing needed to produce scholarship on a narrow topic, why not combine skills and knowledge to work together and create more robust analyses?

A more collaborative structure would also benefit the field by providing channels for non-white scholars, marginalized scholars, and graduate students to produce new scholarship. Opening projects to collaboration offers the opportunity for a more diverse set of minds and perspectives. Rather than having a handful of books that each address issues of race, class, gender, labor, politics, and so on but lean heavily on one lens, why not create works capable of addressing each topic at length? A shift away from the monograph as the premier form would offer scholars the opportunity to create a more robust and cohesive historiography, and would allow for a more equal presentation of perspectives from marginalized scholars. As for graduate students, in many other fields articles and books are co-authored by students alongside a tenured scholar; the work is published with more credibility, while the students benefit from being published.

Overall, I think that providing the opportunity for teamwork would allow history to become a less lonely and more productive field. There are many implications to how this would change our work: how would specializations work? How would historiography change? How would tenure work? But to create a stronger field and a stronger community, one that benefits not only the scholars at the top but also those aspiring to dedicate their lives to the craft yet too marginalized to have their voices heard, I think collaborative work through digital history is an admirable goal.

Module 9: DH and Pedagogy

Education is very near and dear to my heart. Obviously I have dedicated my life to learning, having earned two degrees and started a third. I am engaged to a teacher, my mom is a professor, various other family members are in education, and I started my undergraduate degree in education. I have always been interested in educational theory and pedagogy, even as I left the field for history, so I have become very interested in history pedagogy. I am enamored with Using Digital Humanities in the Classroom and intend to read it in full. But with what I have read so far, I can't help coming back to the arguments against implementing digital history skills.

I think I have come to operate under the assumption that history courses should not be about the content, but about historical skills. This may seem obvious, but then why are so many of our history courses centered around geography and content? This is especially true of intro-level history seminars, whose titles imply learning an incredibly broad scope of historical fact. I suppose we can't include the various historical skills in every seminar title, but it then falls on us as educators to set the tone of a course so that it is not simply about content.

As Claire Battershill and Shawna Ross point out in their first chapter, there are many objections to implementing digital history in the classroom. On a personal level, educators may believe "There's not enough room in the syllabus to teach new skills" (25). Battershill and Ross point out that this argument falls flat when you consider that the alternative (teaching hundreds of years of history in a single class) is also unrealistic, and they suggest that the small amount of content sacrificed in favor of learning DH skills would be worth it. I have to agree with the authors on this point. If history is about teaching skills like critical thinking and encouraging students to draw patterns between events rather than just memorizing dates, then why should digital history be excluded from that skill set? Content should be the framework within which we teach these skills, rather than the learning objective in and of itself. Start with the skills you want to teach, then consider how the content can be used to build those skills. It may seem backwards, but I think it is ultimately more beneficial for students.

In the next section, on faculty arguments against DH skills, Battershill and Ross note that "we have enough trouble teaching our own curriculum as it is" is a common pushback (27). The authors argue that this is a prime time to consider the broader objectives of the entire curriculum, which aligns with my broader point as well. What is the point of earning a degree in history? Is it learning historical facts, or learning to think critically? Even if students don't retain historical content, they will be learning skills necessary for any field: literacy, critical thinking, basic technological competence. If a department is struggling with its curriculum, perhaps it is time to reevaluate the purpose of that curriculum and redesign it to benefit students and instructors alike.

From a student perspective, I can understand why introducing digital history skills may seem irrelevant; many students don't consider how a class is designed from the other side, only the workload they have. In a class that introduces multiple digital history skills that take work to learn, rather than simple content regurgitation, the reaction Battershill and Ross cite, "I wanted a normal class. What does this have to do with what I signed up for?", is probably a common one (28). We can address that by being transparent with students about the skills and how they are relevant. Again, we can treat the historical facts as a framework rather than the end goal; they are the conduit through which we teach the skills. Memorizing dates is not the important part. Learning how to find sources, compare and contrast arguments, think critically, and use technology as a tool are all lessons that go beyond history and will be relevant in many different settings. If we tell students this, they likely still will not want to do the work; but they will understand that there is a purpose behind what we ask of them rather than it being mindless busywork. One of the hardest parts of education is helping students see the purpose of what they are doing. This is why I love Shannon Kelley's description of student responses to her course in "Getting on the Map": the students clearly state that they loved seeing their work live on past the semester and that the assignment didn't just end at the professor's desk. Collaborative work with lasting meaning inspires more motivation in students than one-off assignments or essays with no deeper purpose.

I know that we covered a lot more this week, and I've talked more about the technical exercises in Slack, but I really got caught up in justifying the need for DH skills in classes. Teaching is not just about transferring information. For a better education, we need to focus on providing lasting skills and knowledge that extend beyond the classroom and the semester, and the best way to do that is to create work that matters to students and exists beyond those boundaries.

Battershill, Claire, and Shawna Ross. Using Digital Humanities in the Classroom: A Practical Introduction for Teachers, Lecturers, and Students. London: Bloomsbury Publishing Plc, 2017.

Kelley, Shannon. “Getting on the Map: A Case Study in Digital Pedagogy and Undergraduate Crowdsourcing.” Digital Humanities Quarterly 11, no. 3 (2017).

Module 8: Digital Sustainability and Preservation

I have learned about preservation issues in the past, but this week opened my eyes to the vast expanse of what preservation truly entails. Most notably, I have come to recognize a distinction between the physical preservation of an artifact and the intellectual preservation of the information it represents.

Preservation seems to be the set of actions taken to extend the usefulness of an item, artifact, or piece of information beyond its original lifespan. In terms of paper and books, this could include de-acidifying the paper or storing volumes in a cool environment. In terms of architecture, this could mean rebuilding portions of a building with similar methods to imitate the integrity of the original. In both of these cases, the preservation method is about perpetuating the integrity of the original, remaining as true to it as possible without changing it or creating a copy.

In terms of information, though, preservation seems to be less about the artifact and more about the content. In these cases, the common form of preservation is copying: whether it is transcribing the text of a book or updating a file format, preserving information relies on creating a copy rather than attempting to maintain the original. It seems like a subtle difference, but it is important. It becomes a question of what is important about the object: does it draw its significance from its original form, or does the information hold the inherent value?

As we discuss digital preservation, I feel that in many cases the latter is true. The information that we create digitally is more important than the form it is created in. Music is a clear, simple example. Audio formats have changed vastly in the past century: we have gone through records, cassette tapes, CDs, MP3 players, iPods, and most recently, simply streaming files. When preserving music, I would not think to preserve the analog format; the grooves on the record are not what hold the value, the sound is. By recreating the sound in a new format, we are able to preserve the music even if we can't preserve the physical record.

In a less theoretical sense, I was also intrigued by the variety of actions that we can take to actively sustain and preserve our digital content. Some of the actions included in the "Levels of Preservation" site and "The Socio-Technical Sustainability Roadmap" seem obvious but are not things I had considered before, such as keeping multiple copies, in multiple formats, in multiple locations to avoid losing everything in a single incident. Other actions were things I had never considered at all. Specifically, the metadata section of the "Roadmap" blew my mind. We have talked about Level 1 metadata, the keywords and descriptions of the content used for identification. Even Level 2 seems normal, documenting file sizes and formats. But Levels 3 and 4 intrigued me; they seem to be increasingly higher tiers of metadata about the metadata. Level 3 documents what technologies created the files and when, while Level 4 is metadata about how the files have changed over time. Logically, this makes sense: if at any point the files are corrupted or lost, each level of metadata is a logical step toward diagnosing what could have gone wrong. I was just very intrigued to see all of these details laid out in a specific plan, and this was only a small portion of one of the steps.
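To make those tiers feel concrete, here is a rough sketch of what a single item's preservation record might look like in Python. The function name, field names, and sample values are all my own invention, not the Roadmap's actual schema; it is only meant to show how the four levels stack on top of one another.

```python
import hashlib
import os
from datetime import datetime, timezone

def describe_file(path):
    """Rough sketch of one preservation record, loosely following the four
    metadata tiers described above (field names and values are invented)."""
    with open(path, "rb") as f:
        checksum = hashlib.sha256(f.read()).hexdigest()

    return {
        # Level 1: descriptive metadata -- keywords and descriptions for identification
        "descriptive": {"title": "Sample oral history interview", "keywords": ["labor", "Ohio"]},
        # Level 2: technical metadata about the file itself
        "technical": {"format": os.path.splitext(path)[1], "size_bytes": os.path.getsize(path)},
        # Level 3: provenance -- what technology created the file and when
        "provenance": {"created_by": "Audacity 2.4.2", "created": "2020-10-01"},
        # Level 4: change history -- checksums logged over time show whether the file has changed
        "fixity_log": [{"checked": datetime.now(timezone.utc).isoformat(), "sha256": checksum}],
    }

# Demo: describe this script itself (any local file path would work)
print(describe_file(__file__))
```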

Digital sustainability and preservation is another one of those subjects (though I suppose every topic in this class could be) that could fill an entire course in and of itself. The thorough guides that preservationists have created are impressive, especially in the sense that they are both specific and broad enough to remain relevant after new technologies are introduced. Having lost and recovered (or not recovered) many files in my lifetime, I think I will have to take a step back and use some of these methods to reflect on my own files before I potentially lose them all in some way.

Module 7: Accessibility and Data Visualization

I think this week's work has intimidated me more than any other week so far this semester. Accessibility itself is not a terrifying subject, but I think my absolute lack of experience and knowledge is most visible with this topic. Every other topic I had at least a passing familiarity with, but I know very little about accessibility, which means I have the farthest to grow to become better at it. The concept of accessibility is also not built into any of our other training as historians. We are not typically trained (except perhaps interdisciplinary historians) to use different methodologies to reach different audiences, or to write the same analysis in multiple ways rather than just in book form. The whole field feels rather standardized, which hardly leaves room for the flexibility that accessibility requires.

Of course, this doesn't mean the field can't be accessible. As we saw in this week's readings, there are hundreds of small ways that we as historians, and even just as members of our communities, can make information more accessible. In terms of research, simple and straightforward data visualizations such as Economic Output in US Counties (https://public.tableau.com/en-us/gallery/economic-output-us-counties?tab=viz-of-the-day&type=viz-of-the-day) or Snowfall Extremes (https://public.tableau.com/en-us/gallery/snowfall-extremes-record-snowfalls-united-states?tab=viz-of-the-day&type=viz-of-the-day) can make information easily consumable. They are straightforward, not overwhelming the viewer with numbers or colors or extra variables, which makes the information more accessible. As I toyed around with my QGIS map, where I measured by how much the Democratic or Republican candidate won the vote in specific districts, I compared my color scheme to those in ColorBrewer. I had created a scale from red to blue, with purple in the middle as the transition, so that shading could show how lopsided the result was. After reading "How to Design for Color Blindness," though, it seems that even if all three kinds of color blindness can seemingly distinguish between the three colors, such a scheme could still be confusing or less effective. Instead, I decided that the ColorBrewer scheme of red to blue with white in the center would convey the same message with less chance of confusion.
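Outside of QGIS, the same idea can be tested with a quick plot. Here is a minimal sketch, with made-up margin numbers, of why a diverging red-white-blue ramp centered on zero reads more cleanly than my original red-purple-blue scale; matplotlib's "RdBu" colormap is based on the same ColorBrewer scheme.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical precinct margins: negative = Republican lead, positive = Democratic lead
margins = np.array([-22.5, -8.1, -1.4, 0.9, 5.6, 17.3])

fig, ax = plt.subplots()
# Centering the color scale on zero keeps white as the "too close to call" midpoint,
# so near-ties fade out instead of reading as a third (purple) category
points = ax.scatter(range(len(margins)), margins, c=margins, cmap="RdBu", vmin=-25, vmax=25)
fig.colorbar(points, ax=ax, label="Margin (percentage points)")
ax.set_xlabel("Precinct (hypothetical)")
ax.set_ylabel("Vote margin")
plt.show()
```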

I also appreciated the chance to use the WAVE tool to analyze my own website's accessibility. Since I hardly know anything about web design, I got caught up just trying to do the bare minimum to set up my site and didn't consider accessibility. Something I found interesting in my WAVE analysis was the presence of two menu links leading to the same page. I have my About page set as my Home page, but there are still two tabs; I see now how confusing that can be if you click on either and the page doesn't change. An area where I want to grow is understanding how screen readers interpret documents and web pages. WAVE pointed out that some of my headings could potentially be an issue, even though they looked fine to me; perhaps a screen reader would not be able to identify the heading structure the right way.
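As a way to picture the problem, here is a toy version of a heading-order check. This is only a sketch of the general idea, not how WAVE actually works, and the sample HTML is invented.

```python
from bs4 import BeautifulSoup

# A toy page with the kind of problem WAVE flagged for me: it may look fine visually,
# but a screen reader hears the structure jump from h1 straight to h4
html = """
<h1>About</h1>
<h4>Research Interests</h4>
<h2>Teaching</h2>
"""

soup = BeautifulSoup(html, "html.parser")
levels = [int(tag.name[1]) for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]

# Flag any place where a heading level is skipped on the way down
for previous, current in zip(levels, levels[1:]):
    if current > previous + 1:
        print(f"Possible issue: heading level jumps from h{previous} to h{current}")
```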

Another point I considered earlier in the week actually became the main theme of the last article I read, about design thinking. Historians' work typically takes the form of monographs or journal articles locked behind paywalls, which we have discussed. While the paywall is a separate issue, I think historians need to address the ways that we present our analyses. While monographs are an effective way to express an analysis, they are largely inaccessible to non-historians (and can even be difficult for some historians). Instead, I think we should take a page from design thinking and begin to produce work with a broader audience in mind. Infographics, like the ones we saw on Tableau Public, could revolutionize the way historians present their data.

Consider this: you sign up to present at a conference or are asked to give a talk at your local library on your most recent research, a published monograph. Rather than talking for half an hour at a blank-faced audience or flipping through slides while people's eyes glaze over, you provide handouts of an infographic that summarizes the main points of your research. This gives the audience a document to take with them and remember your research by, and it makes the information more consumable by providing a framework for your study, allowing listeners to tie the new information you present to the data on the infographic. The infographic could deal with actual data, like many of the maps on Tableau Public; it could include maps, like the route of the Napoleonic invasion of Russia; it could even include word maps to show how subtopics are represented throughout your book. It doesn't have to be data in the sense of statistical analysis. By providing such a handout or image, from both an accessibility and a pedagogical standpoint, the audience is much more likely to understand and retain the information provided.

Module 6: Ethics, Biases, and Diversity in a Digital World

Going into this week, I had a vague understanding of how algorithms work and of the extent to which corporations use personal data to tailor our experiences online. The readings for this week widened that understanding, and if this year weren't already a conglomeration of tragedy, I would most likely be angry about it; I am too exhausted to be mad. On top of that, I think I simply accepted years ago that my data was nearly open content. Edward Snowden blew the whistle on the NSA when I was in high school, and that is a distinct memory for me. At the time, I just accepted that privacy wasn't really a thing on the internet. There are seemingly infinite amounts of data out there, which makes my own posts feel irrelevant in the grand scheme; people worry about the government spying on them, but I don't, because who am I to the government? They don't care about me. But now, realizing that every aspect of who I am as a person is documented in some way based on my behavior online and in real life, I see how terrifying that can be. At the same time, I can't help but accept my place as a data point in the world, because what can we do to rage against the machine of Big Data?

A distinction that stood out to me this week is that algorithms function in (at least) two ways: as a function of input data producing output data in a general form, and as a function of your specific input data producing a specific output for you. When I tried to explain the Twitter algorithm to my dad a few weeks ago, he couldn't understand how the algorithm could be explicitly favoring white faces over Black faces; he thought it depended on your own activity and didn't understand why it didn't change from person to person. In this case, it is the kind of algorithm discussed in "Erasure, Misrepresentation and Confusion" or "Algorithmic Accountability: A Primer," one designed to produce a particular output for any input (here, the input is merely the picture posted to Twitter). Each of these cases pointed out glaring flaws in how algorithms operate. In the case of JSTOR, it is clear that the algorithm designed to capture topics is flawed because it cannot recognize topics the way researchers do; in other words, it cannot anticipate broad themes in an article the way a researcher can, only match specific words. Trying to identify an article as the sum of its parts only goes so far. A solution would be to consult researchers when designing an algorithm that parses their work, but building a new algorithm that way would be a costly (assuming researchers were paid) and lengthy endeavor. And parsing a broad expanse of human knowledge for specific tags is an incredibly complex task even for a computer.
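To picture why word-level matching falls short of a researcher's sense of a theme, here is a deliberately naive toy tagger. It is my own illustration, not JSTOR's actual system, and the topic lists and abstract are invented.

```python
# Map each topic to the literal words the tagger is allowed to look for
TOPIC_KEYWORDS = {
    "labor history": {"strike", "union", "wages"},
    "civil rights": {"segregation", "boycott", "naacp"},
}

def tag_article(text):
    """Tag an article by exact word matches only -- no sense of broader themes."""
    words = set(text.lower().replace(".", "").split())
    return [topic for topic, keywords in TOPIC_KEYWORDS.items() if words & keywords]

# An abstract any researcher would file under labor history, but which never
# uses the tagger's exact vocabulary
abstract = "Shop-floor organizing and collective bargaining in Midwestern steel towns."
print(tag_article(abstract))  # prints [] -- the theme is invisible to word matching
```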

On the other hand, the COMPAS algorithm was fed "bad" data when it was trained. Although we were able to see that the outputs were "incorrect," it was not the fault of the machine; it did as it was trained. Instead, it is a reflection of the justice system itself, making clear a pattern of unfair incarceration. In that sense, the COMPAS algorithm acted almost as a diagnostic tool, performing a function similar to researchers crunching large datasets to find patterns. The difference is that the pattern was surfaced by a machine, not a person; the key point is that the machine is incapable of making decisions on its own.

The other type of algorithm, as outlined a bit in the Primer and in "Open Data in Cultural Heritage Institutions," is personalized with your own data. This is the kind of algorithm that people are more familiar with; this is the one that shows you ads for that shirt you just considered buying on all your social media platforms. This is also the kind of algorithm that scares me. I've manipulated it to my advantage before: shopping for something because I know it will surface deals, or searching for something related to a product I hope exists so that the algorithm can find me what I am looking for. But what I didn't realize was that the data goes far beyond shopping habits or which pages and tweets to show based on your interactions.

Despite being a historian, trained to research, analyze, and draw connections from implications, I hadn't considered that every action we take can act as a reflection of who we are. What struck me particularly was "gamblers" being listed as a category in "Open Data in Cultural Heritage Institutions." If a company knows someone gambles, it is easy to make assumptions about their character. Are they reliable with money? Do they have an addictive personality? Can you trust them? These all seem like stretches based on one fact, but combined with data about other gamblers, companies can draw statistical conclusions about these questions and make assumptions based on those statistics. In my case, I recently bought parts to build a computer, and I tweet about games a lot. A company could therefore classify me as a gamer, which could then reflect on who I am. Am I dedicated to my work? Do I think logically or abstractly? Am I professional, or does gaming make me unprofessional? Even something as innocent as a hobby could affect whether I am hired for a job, based on these tangential questions that statistics claim to answer.

I will cut off my post here as it is running long, but I have a lot to think about regarding algorithms and my personal data. As I mentioned before, though, it all feels hopeless. I can't protest a system that defines modern life, and I am just a small data point in the world. The best I can hope for is that our technological overlords are ethical in their use of data, and there has yet to be a sign of that being true.

Week 6: A Proposal for Video Games as Educational Tools

I thought I was excited about the GIS unit, but when I saw this unit I realized this was going to be my favorite week of the semester for sure. I have loved gaming my entire life and have played plenty of historical games, though I have rarely considered how those games have taught me about history. As I reflect on that thought, I realize what a useful tool video games could be. My first exposure to video games in an academic sense was during my master's program, when an American cultural studies student gave a presentation about the way the Vietnam War was portrayed in media, which included gameplay from Call of Duty: Black Ops. He talked about how the game portrays historical memory and PTSD, topics that had always been underlying the story but that I had never considered. After this presentation I began considering how historical memory can shape narratives of the past, not least the ways game developers manipulate historical narratives to create plotlines. An example I had considered lightly before this is Assassin's Creed. The second through fourth Assassin's Creed games follow a man named Ezio Auditore, whom the games present as a historical figure whose family was executed for treason and who, after the rest of his family was killed, disappeared from the historical record. This allows the developers to create an entirely new story around the character, drawing on existing historical records while inventing an alternate history he can be placed into. While many of the historical locations, events, and people are realistic and factually based, it is obviously not a true historical account. This has become a cornerstone of the series' development: using true historical events to create a fantastical plotline of shadowy, metanarrative conflict in the vein of conspiracy theory.

I fleshed out my hope for a more accurate history in a Slack post with Terence, where I describe my belief that history games can become tools for learning rather than educational sources in and of themselves. As the Boom et al. article points out, the video game industry has surpassed Hollywood in revenue, and over half of US households have hardware to play video games and use it daily.[1] Video games have a direct line into people's lives and a thriving community that can shape cultural memory. John Lanchester provides a powerful example in a recent article in the London Review of Books, in which he mentions a play from an esports match of League of Legends that anyone who watches it remembers, giving people a sense of shared memory and community surrounding the game.[2] Similarly, in my own experience, I have made friends and bonded over the Overwatch League, an esports league for a team-based first-person shooter. Yet another example comes up in my conversation with Terence, as we both recollect our experience with the traumatic mission "No Russian" in Call of Duty: Modern Warfare 2.[3] The mission was so shocking that everyone who played the game remembers their experience of it over ten years later. If educators and game developers can find a way to manufacture moments of collective memory such as these, extraordinary moments in video games can become learning tools that redefine how players approach specific historical topics.

A useful example could be The Oregon Trail, which we played this week. Some of my peers pointed out that the way Native Americans are portrayed in the game is extremely reductive and slightly racist. As a fairly simple game, and one that many people who grew up in that era would have played, it could be a great tool to use in class. Teachers could ask students to consider how the characters in the game are portrayed compared to reality, use it as an example of representation, or teach revisionism by prompting students to consider how they would change the portrayal of Native Americans based on evidence they found outside of the game. While a minor tool, the game has the potential for multiple lessons, not just about the content of history but about the way history is practiced professionally.

To conclude, I don't think that video games are inherently educational with regard to history. Sure, you can learn some historical facts and gain some basic historical knowledge, but that is not the core of history. Historical analysis requires taking a step back and critically questioning the content and how it is presented. But is that not what we do with primary and secondary sources anyway? We don't claim that primary sources are inherently history, and it is the practice of historians to critically analyze prior historians' work. Video games are merely a new source of information and narratives that deserves to be used as a learning tool, not disregarded.


[1] Krijn H.J. Boom et al., "Teaching through Play: Using Video Games as a Platform to Teach about the Past," in Communicating the Past in the Digital Age: Proceedings of the International Conference on Digital Methods in Teaching and Learning Archaeology, ed. Sebastian Hageneuer (London: Ubiquity Press, 2020), 29. https://www.jstor.org/stable/j.ctv11cvx4t.8

[2] John Lanchester, "Getting into Esports," London Review of Books 42, no. 16 (August 2020): 5-6.

[3] Robert Carlock and Terence Viernes, #module-05, Clio Wired Slack discussion board, October 2, 2020.

Week 5: Copyright and Open Access

Preface: My blog post this week will be more me hashing out some ideas than stating what I learned. I have been having some complicated thoughts about the nature of copyright and its effects on academia. I started writing one thing and my mind just kind of ran with it, and now it all reads like a long ramble; but seeing as this isn't a formal paper, I decided to just roll with it.

After looking over this week's content, I have come to the conclusion that, for academic writing, copyright is more of a hindrance than a benefit. Academia runs on researchers sharing their ideas so that each person can build on, contribute to, or refute the work of another. The idea of ownership over an idea becomes fairly fluid, but at the same time major innovative ideas are attributed through citations, which give the creator of an idea their due. While I suppose someone could steal an idea and face no legal consequence without copyright laws, I feel that generally the absence of copyright would do little to hinder academia.

On the other hand, copyright restricts the flow of knowledge through publishing companies, as we saw in the readings about open access. Academic researchers don't have the means to publish themselves, so they are forced to hand their ideas to a company to publish; in this sense, copyright probably protects academics from those companies claiming their ideas as their own (similar to the way copyright protects academics from their universities claiming their work). But because they must have someone else publish it, near monopolies have formed that keep that information behind a paywall. Copyright is meant to protect an author's ideas, not to keep them from the audience the author intended to reach. And with copyright laws in place, it is illegal to mass-reproduce the publishers' content as a way around the paywall. So while copyright is supposed to protect ideas from being stolen, in practice copyright law keeps those ideas from being shared at a reasonable rate.

I know that copyright isn't the sole cause of these issues, but I still think copyright and open access are two of the core issues here. Restricting the flow of information in an environment that is inherently about sharing that information seems counterintuitive. I think open access is moving us in the right direction. Maybe the conversation shouldn't be about whether we own our ideas; it should be about how we can share them. Open access offers a glimpse of what the system could look like if we rebuilt it from the ground up. This also ties to some of the points we made in our first week about how history specifically needs to learn to be more collaborative rather than individualistic. If we shift our expectations away from individual work for tenure and toward collaborative, accessible work, not only will we benefit in our own growth, but our work will benefit broader society in a more tangible way. Isn't that always part of how the humanities defends itself? If we can make our work accessible, we can shift the way our entire system works and prove the value of what we do.

Week 3 – Mapping History

This week's content has been the most exciting for me so far, and if I had to pick one digital history topic, mapping is probably the one I have been most interested in learning. Besides history and teaching, I have considered two other fields of study: library science, where my experience has already proven beneficial for this class, and geography. I took one geography class in undergrad, and when I earned a 99% in the course the professor told me I should go into GIS. His lesson on GIS intrigued me. Being able to map human activity and draw conclusions from the data has always fascinated me, and now I am learning that I can merge that with history! It is a prime intersection of my interests.

I was confused by QGIS at first, even as I followed the directions provided. I think my confusion merely arose from inexperience, though; as I kept toying with the settings and repeating the same steps, I began to understand how the data was connected at each step. It is one thing to just follow the directions; it is another to understand WHY we are taking the steps. As I began to understand why each step was taken, I moved on to the Fairfax data. Once I understood that there were two SHP files (one with three districts and one with the precincts of one of those districts), I began to consider how I wanted to display the data.

Each precinct lists a multitude of data: total ballots cast, total votes for each of five candidates, the percentage of the total ballots earned by each candidate, and so on. I began my project by mapping the percentage of total votes earned by each candidate, split into five percentage levels, with each candidate assigned a specific color based on party (Fig. 1). After creating five separate layers, one for each candidate, I realized that Barbara Comstock dominated the map because hers was the top layer, which hid the other candidates' votes. I tried to exclude specific values on each layer, showing only the highest vote shares for each candidate, but that left some precincts value-less because no candidate won an extreme majority there. I was left to ask: how can I show multiple layers of data?

Fig. 1 – Barbara Comstock layer, with the legend showing a separate layer for each of the five candidates.

Rather than displaying multiple layers of data, I realized I could change the filter of a new layer to display new data. I set up a new layer that displayed the winner of each district: while the map was dominated by Barbara Comstock, it became clear which handful of districts John W. Foust won (Fig. 2). While I was happy to find a way to display this data, I decided to go one step further. My last layer displayed who won the election in each precinct, but I was curious how close the election was in each precinct. To display this, I edited the Excel table that held the data I was working with and calculated the difference between the total percentage of votes won by Foust and by Comstock (I excluded the other three candidates, who accumulated about ten percent of the vote between them across all precincts). I then set up a layer with that percentage difference and assigned blue and red to the two ends of the spectrum, with purple in the center. Using a graduated display, I was able to demonstrate which districts Comstock clearly won (dark red), which Foust clearly won (dark blue), and where the race was much closer (purple) (Fig. 3). I intend to continue working with the data to see if I can find any other interesting patterns, but I am content that I was able to create a layer displaying the closeness of the race. One purpose for this specific kind of map could be showing political parties where to campaign in the next election: if a district is solidly red, the Republican Party would need to spend less money campaigning there than in a purple district, where they barely scraped out a majority.

Fig. 2 – Winner by district, by distinctive majority
Fig. 3 – Winner by district, shaded by size of majority vote
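For anyone repeating the Excel step in code, here is a minimal sketch of the margin calculation and the graduated classes, with invented precinct names and percentages standing in for the Fairfax attribute table.

```python
import pandas as pd

# Hypothetical precinct results (the real numbers live in the Fairfax data)
df = pd.DataFrame({
    "precinct": ["Precinct A", "Precinct B", "Precinct C", "Precinct D"],
    "comstock_pct": [58.7, 50.9, 47.2, 33.5],
    "foust_pct": [36.1, 48.0, 49.5, 60.2],
})

# The same calculation I did in Excel: Comstock's share minus Foust's share,
# ignoring the three minor candidates
df["margin"] = df["comstock_pct"] - df["foust_pct"]

# A rough equivalent of the graduated symbology: dark ends for clear wins,
# a middle class for the close races
bins = [-100, -10, -2, 2, 10, 100]
labels = ["Foust solid", "Foust lean", "toss-up", "Comstock lean", "Comstock solid"]
df["race_class"] = pd.cut(df["margin"], bins=bins, labels=labels)

print(df[["precinct", "margin", "race_class"]])
```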

I will briefly touch on some of the readings and HGIS examples we looked at this week as well. One of the most succinct quotes summarizing the use of GIS in history comes from the chapter "Using GIS to Visualise Historical Data": "The use of maps, therefore, presents a challenge to the historian, as it demonstrates the patterns within the data and challenges him or her to explain them."[1] This shifted my perception of mapping; previously I had considered it to be visual statistics laid out on a map. In statistics, however, you manipulate the data and run tests to support claims of cause and effect. GIS, on the other hand, just adds another layer of data to a regular table: geography. By associating numbers with geographic areas, it lets historians notice patterns within the data without having to run a statistical analysis. And ultimately, pure statistics lacks the context that is at the core of historians' work: it can provide a "what?" but not a "why?". I think that emphasizes the usefulness of GIS in historical analysis: HGIS as a tool can provide a framework for a historical question, guiding historians toward answering questions drawn from the patterns they notice in the data.

Each of the databases we looked at this week demonstrates a unique way to approach GIS. In ORBIS: The Stanford Geospatial Network Model of the Roman World, you have the option to reweight the routes to reflect the "cost" of getting to a location (based on distance, time, or expense). This reveals patterns that redefine what geographical distance means. That framing is common in today's world, where people measure distance by drive times, mileage, or plane ticket costs; yet, at least in my experience, people tend to think about ancient travel in terms of raw distance or exaggerated time. Being able to see how quickly Romans could travel opens the door to questions of trade, expansion, and comparisons to the modern world.

The Smithsonian map of "The Decisive Moments in the Battle of Gettysburg" introduces a unique feature that displays the view of the Confederate Army during the battle. With a normal map of the battle, it is simple to see how the armies marched up to each other and could see each other across the field. However, I had never considered whether the armies themselves could have constructed such a map at the time. Obviously each army would know the formation of its own forces, but knowledge of the enemy was based on what scouts and other soldiers were able to see. Utilizing this database could provide evidence for why specific actions were taken, based on the knowledge available. It adds a new layer of analysis to the battle.

This post is already too long, so I will cut it off here, but I have been very invested in this week's content. The mapping of Nazi POW camps in the USSR, the Bankes expedition files, and the mapping of urban Tokyo each demonstrated a distinct way to utilize GIS and the ways it can contribute to a deeper analysis of historical research.


[1] “Using GIS to Visualize Historical Data,” 90.

Citations, Statistics, and Transnational History

Even two weeks in, this class is already making me reflect on my own actions as a historian. Blaney and Siefring's "A Culture of Non-Citation" led me to reflect on my own work, and I looked back on my thesis to see how I cited things. Despite the fact that I got all of my journal articles from ProQuest and JSTOR, I did not cite a single one as an online source. I think my reasons are a mixture of the ones respondents gave in the article: lengthy URLs, insufficient citation generators on the sites, and a lack of training to do otherwise.1 Footnotes where I cited multiple articles would have become insufferable if I had provided the lengthy URL for each document, and I used up plenty of space as it was. The citation generators on most sites (in my experience) have some kind of formatting issue and are usually not worth bothering with. Finally, I was never really trained in how to cite online sources. The default answer is "check the style manual," but we all tended to follow the example of our professors, who all cited print copies, even though they inevitably must have looked at things online.

Ironically, I did cite a handful of sources that I accessed online with URLs. One was a PDF of COINTELPRO files that my advisor sent me, which I located myself and cited with the URL. While I know the real copy existed, I suppose I cited the information because I solely accessed it online. One was a website that was a personal project where someone tracked election results throughout history in Ohio counties; I cited that site because the information was compiled in a specific way on that site. And finally, I cited a school record that I am sure is buried in the archives somewhere, but that I was only able to find online; again, I only accessed the online copy so I cited the online copy.

I’m not sure, then, why I would not cite journal articles that I accessed online as online sources. How does that differ from the COINTELPRO files, which I know also exist in paper form but I didn’t see in person? And while citation issues may seem minor compared to complex historiographical and methodological issues, I am still hung up on this idea that we are cheapening digital history and the labor of those who provide us the access.

On to the hands-on work this week. I have always had an interest in statistics, and I do believe that quantitative analysis has an important but undervalued role in history. However, given the breadth of other methodologies historians need to learn, I understand why quantitative analysis is not typically included, especially after using OpenRefine. I definitely struggled with the program and with manipulating the data. For me it was less about which steps to take and more about not understanding the platform and its commands; I could logically work out what I should do, just not within the constraints of the program. I managed to find various filters and facets that could exclude the NA values, combine duplicate names, and filter the dates. However, I couldn't find a way to efficiently exclude dates before 1533 and after 1665; the filter only went by decade, so I had to calculate those exclusions manually to get my final count. There are a few errors I think I made that I could resolve by becoming more familiar with the program and learning how to manipulate the rows and values, but I think I came close at least; unfortunately, in a real study, close is not good enough.
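For comparison, here is a rough pandas sketch of the cleaning steps I was attempting (drop the NA values, collapse duplicate name spellings, keep only dates from 1533 to 1665), using an invented stand-in table rather than OpenRefine's actual workflow.

```python
import pandas as pd

# Toy stand-in for the dataset we cleaned in OpenRefine
df = pd.DataFrame({
    "name": ["Wm. Shakespeare", "William Shakespeare", "NA", "John Donne"],
    "year": [1599, 1599, 1500, 1633],
})

# Drop the NA placeholder rows
df = df[df["name"] != "NA"]

# Collapse duplicate spellings (OpenRefine's clustering does this far more cleverly)
df["name"] = df["name"].replace({"Wm. Shakespeare": "William Shakespeare"})

# Keep only the records between 1533 and 1665 -- the exact cutoff I couldn't get
# from OpenRefine's decade-level facet
df = df[(df["year"] >= 1533) & (df["year"] <= 1665)]

print(len(df), "records remain")
```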

Prior to this class, I had used Tropy a minimal amount while testing out other source-organizing software (like Zotero for secondary sources). I was pleasantly surprised to find that Tropy (as well as Zotero, which I already knew) was developed by the Center for History and New Media here at GMU, adding to my excitement to be here. After toying around with Tropy, I am excited to use it for my projects going forward. Even without being able to search the text of the images or PDFs themselves, Tropy is a great way to quickly access information you know is in your collection. Tags and metadata are created by hand but can be applied to wide swaths of sources at the same time, making it easy to categorize massive quantities of sources and make them easily filterable. Having one piece of software to organize the hundreds of photos and PDF documents I have from past projects will be infinitely easier than parsing through individual files and opening each one to double-check where the information is.

To close out, I wanted to briefly mention that Putnam's article about transnational history has me reconsidering the projects I could complete during my time at GMU. While reading it, I was also reading a book for my Progressive Era class about a Black man passing as Mexican (and assorted other races) across national lines in the Gilded Age, and I became intrigued by the reality of transnational existence while reading both. When I consider my own life, it becomes clear that news from other countries affects the way I live, or at least how I think: the arrival of Covid, the international response, the looming threat of war with North Korea, and so on. Even without international travel, the reality of life (especially in a time of globalization) is that transnational history will have to become more of the norm.

Week 1 Blog Post

This week's work has been both a bit intimidating and extremely exciting. As this is my first week in a PhD program, I came in with some fear of the workload, but having experienced it I have realized that while it is as heavy as I anticipated, I will be able to handle it.

The most intimidating part of this week was creating the website and the sheer influx of new programs and platforms I have been introduced to. I began my undergraduate career as an education major, and I remember being introduced to WordPress briefly during that time, but I remember nothing about how to design a website. I have been toying around with all the different settings and learning to manipulate the pages, but the learning curve is steep because of how many options are available. At the same time, we were introduced to Slack and Basecamp. I have not explored Basecamp much yet, but Slack reminds me a lot of Discord, so I am not as intimidated by it. While it has been a lot of information to absorb, I know that I will figure it out in the next few weeks.

On the other hand, the content itself has been almost inspirational to me. I applied to GMU because of the digital history component of the program, and I am thrilled to have the chance to participate in and potentially help define the "emerging" field. Having reviewed the past recipients of the Roy Rosenzweig Prize, I am inspired to begin a digital project of my own. Looking over the various projects broadened my perspective on what digital history can entail. From the broad blog of Black Perspectives to the virtual tours in Virtual Angkor to the Digital Archive: International History Declassified, it is clear that digital history is not just a direct inquiry into a question; it is about sharing information and providing new perspectives and resources for those who use the project. I was particularly interested in American Panorama: An Atlas of United States History. Learning GIS is a goal of mine, because I believe mapping would be an incredible way to visualize the reality of social history. Having studied mostly the Midwest, I often find historical accounts overlooking the importance of the people in this region, but mapping could help draw attention visually to the significance of typically overshadowed events in the center of the country.

While I was anxious coming into this week, nothing set my mind at ease more than reading through Milligan's History in the Age of Abundance?. Although I was worried about the reading workload after taking a year off and falling out of the habit, I found his text so fascinating and easy to digest that it renewed my vigor for academia. I was familiar with a number of the issues he brought up, which I had encountered before in classes on archives, where I learned about the digitization of history and the ethics of record keeping. Milligan insists that historians are entering a new phase of the field in which we will be responsible for interacting with records in ways that have typically fallen to archivists (24), and I agree. The sheer abundance of material being created in the modern era is too much for archivists alone to organize for us; we need to be able to sort the information we encounter on our own. While this adds difficulty to the research process, I believe it will be immeasurably beneficial. Who knows what kinds of questions historians will concoct when we are the ones organizing the materials we access, free of the implicit bias of the archivists we rely on? Milligan reminds us that no archive is complete, and all archives are the result of biases and gaps (16-19, 21). And while archivists are highly under-appreciated, they are also the gatekeepers of the knowledge we access. Historians now have the opportunity to create their own methods of organization that could lead to new perspectives and inquiries that were not considered before.

One of Milligan's most compelling points, in my opinion, comes toward the end of his conclusion, where he critiques the field of history. He points out that geography is still the dominant way to classify the field, especially in job postings, and that digital history usually falls into a "bonus" role (238). He also points out that history is unlike other social science or humanities fields in that historians are hardly taught how to work with quantitative data and rely heavily on publishing solo monographs rather than co-authored projects (237-238). These points intrinsically disadvantage the potential of digital history, which could redefine the way history is produced. Milligan closes his argument by saying that the field of history is defined by those within it, not by some external force, and that it is therefore our responsibility to change the way the field operates (242-243). I find that a very inspirational call to action and a defining characteristic of my new career goals.
