Tag Archives: CulturePlex

Math is the Path: Degree Distribution of the Prelims Network and Other Random Graphs

Over the past few weeks, I have been trying to learn the basics of the statistical analysis of graphs, more specifically, complex networks. Here I must be honest: with high-school-level algebra as my only mathematical tool, working through an article such as Albert and Barabási’s “Statistical mechanics of complex networks” is a daunting task. However, I have been able to get through the basic concepts and begin applying them to my work.

Something I found particularly interesting is the concept of Degree Distribution. As we already know, the degree of a node refers to the number of edges connected to that node, and not all of the nodes in a graph have the same degree. The distribution of degrees in a network is “characterized by a distribution function P(k), which gives the probability that a randomly selected node has exactly k edges” (Albert, Barabási 2002). The degree distribution of a random graph, such as the variants of the Erdős–Rényi model, is a Poisson distribution: “[s]ince in a random graph the edges are placed randomly, the majority of nodes have approximately the same degree, close to the average degree <k> of the network” (Albert, Barabási 2002). However, it has been demonstrated that many real-world graphs’ degree distributions differ greatly from the Poisson distribution; instead, they exhibit a power-law tail, P(k) ~ k^-γ, and are called scale-free networks (Albert, Barabási 2002).
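To make P(k) concrete in code, here is a minimal sketch (my own illustration, not taken from the article) that computes the empirical degree distribution of a NetworkX graph; it assumes a recent NetworkX (2.x or later), where G.degree() iterates over (node, degree) pairs:

```python
# Minimal sketch: the empirical degree distribution P(k) of a graph,
# i.e. the fraction of nodes having exactly k edges.
import networkx as nx

def empirical_degree_distribution(G):
    """Return {k: P(k)}, the fraction of nodes with degree k."""
    n = G.number_of_nodes()
    counts = {}
    for _, k in G.degree():                  # (node, degree) pairs
        counts[k] = counts.get(k, 0) + 1
    return {k: c / float(n) for k, c in sorted(counts.items())}

G = nx.gnp_random_graph(1000, 0.005, seed=1)
P = empirical_degree_distribution(G)
# P(k) should peak near the average degree <k> = p*(n-1), i.e. around 5
```

Because the graph is random, the exact values of P(k) vary from run to run unless a seed is fixed, but the probabilities always sum to 1.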

Thinking about this, I became curious about the Preliminaries graph and its degree distribution. Preliminaries is a real-world network; it’s based on real information gathered from physical objects. However, the process of designing the schemas and relationships requires a fair amount of human manipulation. More about Preliminaries here, here, and here. So I decided to find the Prelims degree distribution and compare it with some random graphs. Now, as I said, my math skills are lacking a bit, something I plan on really working on over the next few years, but I do have what it takes to model degree distribution using the Python modules NetworkX and Pylab. Here I will briefly describe my methodology (code) and the resulting plots.

First I decided to generate two random graphs and plot their degree distribution:

GNP Graph (Erdős–Rényi model):

This graph is generated by inputting the number of nodes in the graph and the probability that there is an edge between each pair of nodes. Because I wanted to imitate the node and edge count of the Prelims graph, I found a probability that would generate approximately 3464 edges. The formula for computing the probability is included in the code snippet used to generate the graph:

import pylab as pl
from networkx import *

### generate gnp_random_graph
### n = number of nodes
### m = expected number of edges

n = 1616
m = 3464

### p = probability of edge creation
### m = p*n*(n-1)
### 3464 = p*1616*1615
### p = 0.0013272844312295006

p = 0.0013272844312295006
# generate graph
G = gnp_random_graph(n,p,directed=True)

# print basic stats
print ("Number of Nodes : %i" % (n))
print ("Number of Edges : %i" % (number_of_edges(G)))

# make a list of each node's degree
degree_list = list(dict(G.degree()).values())

# compute and print average node degree
print ("Avg. Node Degree: %f" % (float(sum(degree_list))/len(degree_list)))

# generate a list degree distribution
degree_hist = degree_histogram(G)
if len(degree_hist) < 15:
    print ("Degree Frequency List:")
    print ("Degree : # of Nodes")
    # print the degree and number of nodes that have that degree
    for degree,number_of_nodes in enumerate(degree_hist):
        print ("%i : %i" % (degree,number_of_nodes))
else:
    print ("Degree Frequency List Too Long to Print")

# generate x,y values for degree dist. scatterplot
x_list = []
y_list = []
for degree,num_of_nodes in enumerate(degree_hist):
    if num_of_nodes > 0:
        x_list.append(degree)
        y_list.append(num_of_nodes)

# label the graph
pl.title('Degree Distribution\nGNP Graph')
pl.xlabel('Degree')
pl.ylabel('# of Nodes')

# plot degree distribution
pl.scatter(x_list, y_list)
pl.show()

This script results in the terminal output:

Number of Nodes : 1616
Number of Edges : 3605
Avg. Node Degree: 4.461634
Degree Frequency List:
Degree : # of Nodes
0 : 17
1 : 76
2 : 201
3 : 284
4 : 299
5 : 259
6 : 218
7 : 119
8 : 82
9 : 32
10 : 19
11 : 7
12 : 2
13 : 1

And the following scatter plot:


As you can see, this resembles a Poisson distribution, like this one taken from the WolframAlpha website:


Scale Free Random Graph:

Next I generated a random scale-free graph. The script I used was very similar to the previous one, except that I used a different graph generator, which takes only the node count as a parameter, and I set tighter limits for the x and y axes:

n = 1616
G = scale_free_graph(n)

# set limits for the axes (the exact values here are illustrative)
pl.xlim(0, 100)
pl.ylim(0, 500)

This script results in the following terminal output:

Number of Nodes : 1616
Number of Edges : 3428
Avg. Node Degree: 4.242574
Degree Frequency List Too Long to Print

And the scatter plot:


This plot resembles a power law tail, such as this one from WolframAlpha:


Scale-free distributions are commonly plotted using log-log plots, such as those used by Albert and Barabási in the previously mentioned article. To produce a log-log plot, you can simply change the Pylab scale to log and, for better visualization, change the axes limits:

# set limits for the axes (illustrative values;
# both must be > 0 on a log scale)
pl.xlim(0.5, 1000)
pl.ylim(0.5, 1000)

# log-log plot
pl.xscale('log')
pl.yscale('log')

The random scale free graph plotted in log-log looks like this:


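Once the points are on log-log axes, the exponent γ of the tail can also be estimated. Here is a rough sketch (my own addition, not part of the original analysis) using a least-squares line fit with NumPy; note that fitting a line to log-log points is a crude estimator, and maximum-likelihood methods are preferred for serious work:

```python
# Rough sketch: estimate the power-law exponent gamma from (degree, count)
# points by fitting a line to their log-log transform; gamma = -slope.
import numpy as np

def fit_power_law(degrees, counts):
    x = np.log10(np.asarray(degrees, dtype=float))
    y = np.log10(np.asarray(counts, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return -slope

# synthetic frequencies following P(k) ~ k^-2.5, just to exercise the fit
ks = np.arange(1, 50)
counts = 1000.0 * ks ** -2.5
gamma = fit_power_law(ks, counts)   # recovers ~2.5 on this exact power law
```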
So, now that we have seen what the degree distributions of a GNP random graph and a scale-free random graph look like, let’s take a look at the degree distribution of the Preliminaries graph. Although I should be able to read the Prelims .gexf file with NetworkX, I was generating error after error, so I decided to simply use the Gephi scripting console to generate a .txt file with the degree of each node. This should have been fairly straightforward, but the formatting of the degree values in the .txt file turned out to be extremely difficult to work with, so I wrote a fairly ugly script to clean up the data before computing and plotting the degree distribution. Using the following script I was able to generate some plots:

import pylab as pl

def degree_distribution(degree_list):
    # set up a dictionary with degree as key
    # and frequency as value
    degree_dict = {}
    for degree in degree_list:
        degree_dict[degree] = degree_dict.get(degree, 0) + 1
    return degree_dict

f = open('prelims_degree.txt','r')
line = f.readline().split()
f.close()

# clean up the data: keep only the digits in each entry
clean_line = []
for degree in line:
    degree = ''.join([num for num in degree if num.isdigit()])
    clean_line.append(degree)

# make a list to be used in the
# degree_distribution function
degree_list = []
for degree in clean_line:
    try:
        degree = int(degree)
        degree_list.append(degree)
    except ValueError:
        continue

# compute and print basic graph stats
# num. of nodes, edges, and average degree
avg_node_degree = float(sum(degree_list))/len(degree_list)
print ("Number of Nodes : %i" % (len(degree_list)))
print ("Number of Edges : %f" % (sum(degree_list)/2.0))
print ("Avg. Node Degree: %f" % (avg_node_degree))

# set up a dict with degree frequency values
degree_dict = degree_distribution(degree_list)

# generate x,y values for degree dist. scatterplot
x_list = []
y_list = []
for degree,frequency in degree_dict.items():
    x_list.append(degree)
    y_list.append(frequency)

# label the graph
pl.title('Degree Distribution\nPrelims Graph')
pl.xlabel('Degree')
pl.ylabel('# of Nodes')

# set limits for the axes (illustrative values)
pl.xlim(0, 100)
pl.ylim(0, 500)

# plot degree distribution
pl.scatter(x_list, y_list)
pl.show()
Which generates the following scatter-plot:


This plot looks like it also has a power-law tail, although I can’t be sure, and, as you can see, it is quite similar to the random scale-free graph’s degree distribution. Alternatively, if we plot the Prelims data in a log-log plot, we generate the following image:


Once again we see that the Preliminaries graph’s degree distribution is quite similar to that of the random scale-free graph. This leads me to believe that the Preliminaries graph is indeed scale free, as many real-world networks are. What does this mean? That Preliminaries is a ‘real’ network? If anything, it further validates the study of early cultural production using a network-based methodology, as we can see that the network we have generated for this study does indeed share characteristics with modern-day networks, and thus provides a comparative methodology for analyzing network evolution throughout human history.


All of the code for this post is available in this gist.

Leave a Comment

Filed under Uncategorized

Winter Break @CulturePlex

Well it’s already the 20th of December, classes have been over for two weeks and things are a bit more relaxed around here at Western. Campus is strangely deserted and it’s hard to get that 4 p.m. cup of coffee that you know every grad student requires for survival. Everyone is closing up shop, mopping the floor and stepping out early. It’s that time of year.

What is dbrownbeta doing for vacations…drinking margaritas in Cabo…or perhaps a bit of SCUBA in Roatán—maybe mounting a quick roadtrip to Utah?

Not this year folks. Just sitting tight in Canada waiting for the snow.

I am taking advantage of this time to get a few things done. When school is in session it’s tough to get much real work done with the nonstop itinerary of classes and meetings and readings and speakers. Your job is grad school and grad school is your life. Does that make sense? So your job is really to live the life of a grad student. Confusing? Yes, it confuses me as well.

Moving on to more technical and less ridiculous topics, I want to talk a bit about what I am doing this Winter Break. Let’s do this.

Finishing Up My CourseWork

Last weekend I finished my project for the Máquina cultural class. Although the essay wasn’t my finest work, the models I made turned out quite nicely, and I got a chance to experiment with Gephi’s Geo Layout. I also got to add a new function to my Gephi/Python library.

The graph consisted of the metadata of a corpus of literary and critical texts laid out in Gephi. The majority of nodes were just standard nodes with normal attributes, however, the nodes that represented geographical locations (cities) were arranged based on their lat/long attributes using Gephi’s Geo Layout.

These geonodes were then fixed in place using my new fix_set function in combination with other functions from the Gephi/Python library. Then the other nodes were arranged around the geonodes using ForceAtlas 2.

Pretty neat huh?

CulturePlex Projects

I am also working on a few CulturePlex projects this Winter Break. I recently got the chance to help out with the Sylva project. The lab is getting ready to officially release Sylva to the public, and because one of Sylva’s priorities is ease of use, we want to provide comprehensive documentation. I am helping to develop the content of this documentation. We are working on three types of documentation: a user guide that describes all of Sylva’s features, a step-by-step tutorial for creating your first graph with Sylva, and a help menu with FAQs, solutions, etc.

We are also beginning work on a new period of the Preliminaries project. In this phase we will focus on the time period of 1643-1661 during the administration of Luis de Haro. We are particularly interested in this period because this is when Pedro Calderón de la Barca began to be published prolifically. In this case, the graph will be used not only for general network analysis, but also as a supplement to studies on the contemporary reception of Calderón’s work. We have just barely begun to assemble the first editions list for this phase, but we plan to have it finished before May.

Personal Projects

I have three personal projects this winter break: learn HTML/CSS, learn JavaScript for use in web pages and the Google Maps API, and build a personal web page. I started learning HTML last Sunday evening, and my colleague Roberto showed me Bootstrap on Wednesday. Bit by bit, my website is coming along:


It’s called xitōmatl and it will provide links to my social networking sites, descriptions of my projects (personal and CulturePlex) with their associated image galleries, my personal profile and CV, etc. Also, I plan on creating a page that focuses specifically on the research of New Spain. There I will provide a variety of content supplemented with links to digitized rare New Spanish books, various websites useful in the study of New Spain, and a few resources for learning Classical Nahuatl (another project coming soon). xitōmatl is available at this Gist if you want to take a look. It’s still a bit sloppy (a bunch of style elements that need to be moved into a CSS file), but you get the idea.

That’s it for today…time to get back to work.

Happy Holidays




Leave a Comment

Filed under Uncategorized

The Miracle

Hello and welcome back! It’s hard to believe that two weeks have already passed and it is time for another blog entry. Although Preliminaries is alive and well and still developing, I would like to deviate a bit today and talk about another project that I started a few days ago called, for the current lack of a better title, Guadalupe.

Here Guadalupe refers to the beloved and famous Virgen de Guadalupe, a painting that, according to legend, miraculously appeared on the maguey cloak of the Nahuatl-speaking Juan Diego on December 9, 1531. As close as a painting can come to being a rock star, Our Lady of Guadalupe inspired thousands of copies during the colonial period of New Spain, copious amounts of literature, entire lines of merchandise, and fervent devotion amongst the Mexican people.

I first became interested in New Spanish painting because of a class I took last spring with Alena Robin, an art historian and Assistant Professor in the Hispanic Studies department here at Western. This fall I have been taking another one of her courses, “Migration and Ethnic Relations in Colonial Latin American Art,” in which we spend a lot of time discussing ethnic interactions in the American colonies and their artistic expressions. The Virgin of Guadalupe fits quite well with this theme for a variety of reasons: the location of her shrine at Tepeyac (associated with the goddess Tonantzin), the fact that Juan Diego was of Mexica heritage, the strong base of Nahuatl-language literature documenting the apparition, the appearance of a Nahuatl-speaking Virgin Mary, etc. So I chose Guadalupe as the subject of my term paper and began to investigate.

What I am really interested in right now is how culture works as a process, how it is coded as information and spread throughout networks, and ways to model and evaluate large cultural data sets to better understand cultural phenomena. The Virgin of Guadalupe fits perfectly within this model because of her long-term success in Mexican culture, the quantity of cultural production centered around the miracle, and the interesting transcultural implications presented by her mythology and cult. To get a better look at the Guadalupe phenomenon, I decided to gather a big data set from her colonial-era production and try to do some network analysis in Gephi. Wait a minute, a big data set for a term project? How the heck am I gonna do that? Luckily, I was able to build upon the work of other digital humanists and put together the first phase of my project in a couple of hours.

Using the CulturePlex’s Baroque Art Database, I quickly located around 700 paintings with a Guadalupan theme. From there I simply downloaded the data in various .csv files, coded up a couple of quick functions in Python to clean up the data, and off I went. After playing around in Gephi for a few minutes I had a nice image of the data:


It makes for a pretty picture, but it doesn’t really tell us much other than the basic structure of the database. One thing you notice (if you have a magnifying glass) is the large number of anonymous paintings. A little frustrating for a social networks project, but it’s the nature of the beast: most colonial paintings were not signed. Oh, I forgot to mention, the schema used for this part of the visualization was quite simple. All of the Guadalupan paintings are linked to the original, miraculous painting, and the painters are linked to the paintings they signed. That is why we see the strong central presence of “The Miracle”.
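The schema just described maps naturally onto a simple Gephi edge list. Here is a hypothetical sketch (the column names and records below are invented for illustration; this is not the code I actually used) of how such a list could be generated from painting records:

```python
# Hypothetical sketch: build a Gephi-style edge list in which every painting
# links to the central "The Miracle" node, and signed paintings also link to
# their painter. All names below are invented for illustration.
import csv
import io

records = [
    {"painting": "copy_001", "painter": "Anonymous"},
    {"painting": "copy_002", "painter": "Jose de Ibarra"},
]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["Source", "Target"])
for rec in records:
    writer.writerow(["The Miracle", rec["painting"]])
    if rec["painter"] != "Anonymous":        # unsigned works stay unattributed
        writer.writerow([rec["painting"], rec["painter"]])
edge_list = out.getvalue()
```

A CSV like this can be imported through Gephi’s Import Spreadsheet dialog as an edge table.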

As I said before, this doesn’t tell us much, so I am currently working on improving the data set through the addition of published texts with a Guadalupan theme, manuscripts gathered from the historiography of New Spanish painting, and institutional affiliations. So far I have only added a few manuscripts described in Mina Ramírez Montes’ “En defensa de la pintura. Ciudad de México, 1753”, an article detailing four documents uncovered by historians that all refer to the 18th-century New Spanish painters’ desire for improved working conditions. These documents provide a wealth of information for network analysis, as each one is signed by a variety of painters, prominent and otherwise. After adding the documents, we start to see some more interesting results:

Here we see distinct communities beginning to form, and the bridges between them. First of all we notice the “halo” around the miracle. These are the anonymous paintings from the Baroque Art Database. Further out we see paintings with known authorship, with their painters appearing in red. We also see the various documents in green, with their associated groups of painters. Here we begin to notice two communities: the first is generated by the earlier documents dating to the 1720s, the second by the later documents from the 1750s. We can also see the bridges that form between the two groups, passing through Nicolás Enríquez and José de Ibarra. These results, however obvious they may be, are the kind of connections and communities I hope to detect within the larger data set once it has been assembled and the quantity of information makes it difficult to detect these patterns without computational tools.

That’s it for today folks. As always, I would gladly answer any questions at dbrow52@uwo.ca or @dbrownbeta

Ramírez Montes, Mina. “En defensa de la pintura. Ciudad de México, 1753.” Anales del Instituto de Investigaciones Estéticas 78 (2001): 103-128. Print.


Leave a Comment

Filed under Uncategorized

Preliminary Analysis of the Preliminaries Project

Hello! Welcome back to the Preliminaries Project blog!

This week, as promised, I would like to give you all a bit more information about the project, including the current status of the Preliminaries database and the methodology used in constructing it. However, the primary focus of this entry will be the various techniques used to analyze the Preliminaries graph, since that is what I have spent the last few days trying to figure out. But first let me give you a bit of background about me.

My educational background is primarily literature and linguistics. I did my undergrad work at the University of Oregon, where I studied Spanish with a fair bit of Linguistics as a secondary focus. Last year, I started my graduate work as a master’s student here at Western studying Hispanic Literature. My first contact with using technological means to study literary topics came last spring in Professor Suárez’s class about the Hispanic Baroque. As a class project we started building an early version of the Preliminaries database in Sylva. I ended up doing my final project on the social networks involved in the production of early editions of Don Quixote, and I haven’t looked back. Last summer, I began officially working here at the CulturePlex Lab on the Preliminaries Project. So, to make a long story short, I am a rookie when it comes to digital humanities, computer modeling, and programming. This fall I have been taking a class that focuses on Python, a high-level programming language that is popular amongst scientists of all types, and also a Coursera course about social network analysis. I am just learning how to use this technology, but I hope I can share some of this learning process with you, and in the end maybe everyone will benefit. Okay, enough about me…let’s get back to the project.

As I mentioned before, the Prelims Project is ongoing, and although it isn’t 100% complete, the database is sufficiently developed to begin doing a bit of analysis. Currently the first editions list (Duque de Lerma, 1598-1618) consists of 330 editions, out of which I have been able to obtain 228 scanned copies of preliminary sections, approximately 70%, which isn’t bad considering that these texts were published 400 years ago. Of these scans, around 120 have been entered into the database, producing a graph with 1612 nodes and 3472 relationships. Rendered in Gephi using the built-in Yifan Hu Multilevel algorithm, colored for modularity, and sized for betweenness centrality, the graph looks like this:

This visualization is nice because you can see the general structure of the graph, and the coloring gives you a good idea of the communities within the network as a whole. However, the amount of information presented here is overwhelming, so I have been looking for ways to control the visualization, and the information on which it is based, to allow for some detailed comparative analysis.

One of the nice features of Gephi is that it has a variety of built in filters to allow the user to limit the information that appears in the graph. Something that we are interested in regarding the Prelims Project is the community structures within the graph. Let’s use a filter to see the modules of various famous writers of the period:

First Miguel de Cervantes, author of Don Quixote

Then Lope de Vega, author of the Comedias

As you may imagine, this type of filtering is crucial for analysis. It allows us to pick apart the graph and study its elements in a controlled and manageable fashion.

There is another type of subset within a graph called the Ego Network. These are based on direct connections between a node and its neighbors. Although Gephi also has a filter for Ego Networks, I encountered a small problem here: Gephi only allows filtering for up to three degrees of separation. This presents a challenge with the Preliminaries graph due to the schema design for the database.

In order to establish a connection between an author and an edition there are two steps: Author->Obra, Obra->Edition. This is due to organizational/editorial concerns that I hope to address in the next blog. Furthermore, for the author to be related to the people involved in the approval, licensing, and publication of an edition, two more steps are required, e.g. Edition->Approval, Approval->Censor. Therefore, to establish what I call a Publication Network, somewhat equivalent to an Ego Network, I need to be able to find neighbors up to four degrees of separation. Thankfully, Gephi includes a scripting console based on the Python programming language. Using functions based on the following patterns I am able to mimic the filtering abilities of Gephi and create a way to isolate and compare subsets of the graph in order to generate these Publication Networks:
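The functions themselves appeared as screenshots in the original post and are not reproduced here, but the underlying idea, collecting every node within n steps of a starting node, can be sketched in plain Python as a breadth-first expansion over an adjacency dict (the node names below are illustrative):

```python
# Plain-Python sketch of an n-degree neighborhood: starting from one node,
# repeatedly expand the frontier through the adjacency dict n times.
def neighbors_within(adjacency, start, n):
    """Return the set of nodes reachable from start in at most n steps."""
    frontier = {start}
    seen = {start}
    for _ in range(n):
        frontier = {nb for node in frontier
                    for nb in adjacency.get(node, [])} - seen
        seen |= frontier
    return seen

# toy chain mirroring the schema: Author -> Obra -> Edition -> Approval -> Censor
adj = {
    "Author": ["Obra"],
    "Obra": ["Edition"],
    "Edition": ["Approval"],
    "Approval": ["Censor"],
}
# four degrees of separation are needed to reach the Censor from the Author
```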


It is also important to note that it is necessary to combine the subsets generated by these functions, which I have done using the following function, “completelist”, and then to make sure there are no stray ‘NoneType’s or duplicates, which I have done with “masterlist”:
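Those two steps, concatenating the subsets and then stripping out None entries and duplicates, can be sketched like this (a reconstruction of the behavior described in the text, not my actual console code, which was shown as an image):

```python
# Sketch of the combine-then-clean step: merge several node subsets into one
# list, dropping None entries and duplicates while preserving order.
def masterlist(*subsets):
    combined = []
    for subset in subsets:
        combined.extend(subset)
    cleaned = []
    for node in combined:
        if node is not None and node not in cleaned:
            cleaned.append(node)
    return cleaned

nodes = masterlist(["a", "b", None], ["b", "c"], [None, "a"])
# nodes == ["a", "b", "c"]
```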

Then, using the subsets generated here I can color and size the Publication Networks using the following functions:

I can also find the intersections of various Publication Networks using the following function:
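Finding the overlap between two Publication Networks amounts to a set intersection over their node lists; a hypothetical sketch of that step (again, not the original console function, and the node names are invented):

```python
# Sketch: the nodes shared by two Publication Networks, as a sorted list.
def intersect_networks(network_a, network_b):
    return sorted(set(network_a) & set(network_b))

balbuena = ["balbuena", "grandeza_mexicana", "printer_1", "censor_2"]
torquemada = ["torquemada", "monarquia_indiana", "printer_1", "censor_2"]
shared = intersect_networks(balbuena, torquemada)
# shared == ["censor_2", "printer_1"]
```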

Thus, using a very basic knowledge of Python I am able to manipulate the graph and compare any subsets of nodes that I would like.

An applied example of these functions would be the following:

Publication network of Bernardo de Balbuena, author of Grandeza mexicana: Red

Publication network of Juan de Torquemada, author of Monarquía indiana: Blue

Their intersecting Publication Networks: Yellow

That’s it for today folks. Over the next week and a half I hope to generate some definite results to talk about and some more refined visualizations using my newfound techie skills.

Hope to see you next time around. For more information you can always email me at: dbrow52@uwo.ca or follow me on twitter @dbrownbeta





Filed under Uncategorized