Digital social networks (DSNs) produce data that is of great scientific value. They have allowed researchers to study the flow of information, the structure of society and major political events (e.g., the Arab Spring) quantitatively at scale.
Owing to its simplicity, size and openness, Twitter is among the most popular DSNs used for scientific research. On the Twitter platform, users generate data by tweeting messages of at most 140 characters. To consume content, users follow each other. Following is a one-way interaction, and for this reason Twitter is regarded as an interest network (Gupta, 2013). By default, Twitter is entirely public, and there are no requirements for users to enter personal information.
The lack of reliable (or usually any) demographic data is a major criticism of the usefulness of Twitter for research purposes. Enriching Twitter accounts with demographic information (e.g., age) would be valuable for scientific, industrial and governmental applications. Explicit examples include opinion polling, product evaluations and market research.
Our approach assumes that people who are close in age have similar interests as a result of age-related life events (e.g., education, childbirth, marriage, employment, retirement, changes in wealth). This is an instance of the well-known homophily principle, which states that people with similar attributes tend to form ties with one another (McPherson, 2001). For age inference in Twitter, we exploit the fact that most Follows (we use capitalisation to indicate the Twitter-specific usage of this word) are indicative of a user's interests. Putting things together, we arrive at our central hypothesis: (a) somebody Follows what is interesting to them, and (b) their interests are indicative of their age. Hence, we propose to infer somebody's age based on whom/what they Follow. We created the artificial @williamockam account shown in Figure 1 as a running example to illustrate our method.
The contribution of this paper is the derivation of a probabilistic model that infers any Twitter user's age based only on whom/what they Follow, and which is not restricted by national or linguistic boundaries. Our model handles the high levels of noise in the data in a principled way and is massively scalable, allowing us to infer the age of 700 million Twitter accounts with high accuracy. In addition, we supply a new public dataset for use by researchers interested in the problem of attributing vertices in social networks.
There is a large body of excellent research on enhancing social data with demographic attributes. This includes work on gender (Burger, 2011), political affiliation (Conover, 2011; Pennacchiotti, 2011), location (Cheng, 2010) and ethnicity (Mislove, 2011; Chang, 2010; Pennacchiotti, 2011). Also of note is the work of Fang (2015) who focus on modelling the correlations between various demographic attributes.
Some of the most exciting recent work on detecting ages from social data has been in the field of computer vision where age is determined from user images(Fu, 2010; Guo, 2008). However, computer vision methods are difficult to apply to Twitter data: few accounts have profile images and those that do are often inaccurate or of poor quality.
Following the seminal work of Schler (2006), the majority of research on age detection of Twitter users has focused on linguistic models of tweets (Nguyen, 2011; Rao, 2010; Al Zamal, 2012). Notably, Nguyen (2013) developed a linguistic model for Dutch tweets that allows them to predict the age category (using logistic regression) of Twitter users who have tweeted more than ten times in Dutch. They performed a lexical analysis of Dutch-language tweets and obtained ground truth through a labour-intensive manual tagging process. The principal features were unigrams; they found that older people use more positive language, fewer pronouns and longer sentences. They concluded that age prediction works well for young people, but that above the age of 30, language tends to homogenise.
In general, lexical approaches suffer from the concept of social age (DongNguyen, 2014). Social age is determined by life stage (married, children, employment, etc.) rather than years since birth, and it has a strong effect on writing style. People often adapt their language to mimic the perceived social norm in a group. Additionally, tweet-based methods struggle to make predictions for Twitter users with low tweet counts. In practice, this is a major problem since we calculated that the median number of tweets for the 700m Twitter users in our data set is only 4 (the tweets field shown in Fig. 1 is available as account metadata for all accounts).
Oktay (2014) proposed to estimate the age of Twitter users from the first name supplied in the free-text account name field (e.g., William in Figure 1). In their research, they use US social security data to generate probability distributions of birth years given a first name. They show that for some names the age distributions are quite sharply peaked. A potential issue with this approach is that methods based on the "user name" field rely on knowledge of the user's true first name and their country of birth (Oktay, 2014). In practice, this assumption is problematic since Twitter users often do not use their real names, and their country of birth is generally unknown.
Approaches that combine lexical and network features include Al Zamal (2012) and Pennacchiotti (2011), who show that using the graph structure can improve performance at the expense of scalability. Kosinski (2013) used Facebook Likes to predict a broad range of user attributes mined from 58,466 survey respondents in the US. Their approach of solely using Facebook Likes as features for learning has the benefit of generalising readily to different locales. Culotta (2015) applied a similar Follower-based approach to Twitter to predict demographic attributes; however, their approach of using aggregate distributions of website visitors as ground truth is restricted to predicting the aggregate age of groups of users. Our work is inspired by the generality of the approaches of Kosinski (2013) and Culotta (2015); however, our setting differs in two ways. First, we use data native to the Twitter ecosystem to generalise from a few examples to make individual predictions for the entire Twitter population. Second, we do not assume that our sample is an unbiased estimate of the Twitter population; we explicitly account for this bias to make good population predictions. For these reasons, ground truth is hard to obtain and careful probabilistic modelling is required to infer the age of arbitrary Twitter users.
Probabilistic Age Inference in Twitter
Our age inference method uses ground-truth labels (users who specify their age), which we generalise to 700m accounts based on shared interests derived from Following patterns.
Data Collection and Ground-Truth Labels
To extract ground-truth labels we crawl the Twitter graph and download user descriptions. To do this we implemented a distributed Web crawler using Twitter access tokens mined through several consumer apps. To maximize data throughput while remaining within Twitter’s rate limits we built an asynchronous data mining system connected to an access token server using Python’s Twisted library Wysocki (2011).
Our crawl downloaded 700m user description fields. Fig. 1 shows the Twitter profile with associated metadata fields for the fictitious @williamockam account, which we use to illustrate our approach. We index the free-text description fields using Apache SOLR (Grainger, 2014) and search the index for REGular EXpression (REGEX) patterns that are indicative of age (e.g., the phrase: “I am a 22 year old” in Fig. 1) across Twitter’s four major languages (English, Spanish, French, Portuguese). For repeatability we include our REGEX code in the Appendix.
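As a minimal illustration of this matching step (the exact multi-language patterns are given in Listing 1 of the Appendix; the pattern below is a simplified, English-only sketch):

```python
import re

# Illustrative sketch only: the production patterns cover four languages
# and more phrasings. This toy pattern handles "I am a 22 year old"-style
# English statements.
AGE_PATTERN = re.compile(r"\bI(?:'m| am) a (\d{1,2})[ -]year[ -]old\b",
                         re.IGNORECASE)

def extract_age(description):
    """Return the stated age as an int, or None if no match."""
    match = AGE_PATTERN.search(description)
    return int(match.group(1)) if match else None

print(extract_age("PhD & #Disney fanatic. I am a 22 year old tea drinker."))  # 22
print(extract_age("I feel young at heart."))  # None
```

In practice such patterns must be iterated against false positives (see the Appendix), so this sketch should be read as the shape of the approach rather than the deployed matcher.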
Twitter is ten years old and contains many out-of-date descriptions. To tackle this stale-data problem we restricted the ground truth to active accounts, defined to be accounts that had tweeted or Followed in the last three months (we do not have access to Twitter's logs). This process discovered 133,000 active users (out of the 700m indexed accounts) who disclosed their age, which we use as "ground-truth" labels. For each of these we downloaded every account that they Followed. Fig. 1 shows that @williamockam Follows 73 accounts, and we downloaded each of their user IDs.
We use ten age categories with a higher resolution in younger ages where there is more labelled data. For our ground-truth data set, the age categories, number of accounts, relative frequency and average number of features per category are shown in Table 1.
|idx||age range||count||freq||mean features|
Applying REGEX matches to free-text fields inevitably leads to some false positives due to unanticipated character combinations when working with large data sets. In addition, many Twitter accounts, while correctly labelled, may not represent the interests of human beings. This can occur when accounts are controlled by machines (bots), accounts are set up to look authentic to distribute spam (spam accounts) or account passwords are hacked in order to sell authentic looking Followers.
To reduce the impact of spurious accounts on the model we note that (1) incorrectly labelled accounts can have a large effect on the model when they are distant in feature space from other members of their class/label, and (2) incorrectly labelled accounts that have a small effect on the model (e.g., because they only Follow one popular feature) do not matter much by definition. To measure the effect of each labelled account on the model we compute the Kullback-Leibler divergence KL(p ‖ p_{-i}) between the full model and a model evaluated with one data point missing. Here, p is the likelihood of the full labelled data set, and p_{-i} is the likelihood of the model using the labelled data set minus the i-th data point. This methodology identifies any accounts that have a particularly large impact on our predictive distribution. We flagged any training examples that were more than three median absolute deviations from the median score for manual inspection. This process excluded 246 accounts from our training data; examples are shown in Table 2. We also randomly sampled 100 data points from across the full ground-truth set and manually verified them by inspecting the descriptions, tweets and whom/what they Follow.
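The flagging step can be sketched as follows; the scores passed in would be the per-example leave-one-out KL divergences described above (the numbers below are hypothetical):

```python
import numpy as np

def flag_outliers(scores, n_mads=3.0):
    """Flag examples whose influence score is more than `n_mads` median
    absolute deviations (MADs) from the median score, mirroring the
    manual-inspection rule described in the text. `scores` would be the
    per-example leave-one-out KL divergences."""
    scores = np.asarray(scores, dtype=float)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median))
    return np.abs(scores - median) > n_mads * mad

# Hypothetical influence scores: one account dominates the model.
scores = [0.010, 0.012, 0.009, 0.011, 0.350]
print(flag_outliers(scores))  # only the last example is flagged
```

The MAD-based rule is robust to the flagged outliers themselves, which is why it is preferable to a mean/standard-deviation threshold here.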
|Handle||Twitter Description||REGEX age||Reason to Exclude|
|RIAMOpera||Opera at the Royal Irish … Presenting: Ormindo Jan 11…||11||An Irish Opera|
|TiaKeough13||My name Tia I’m 13 years old.||13||Hacked account|
|39yearoldvirgin||I’m 39 years old… if you’re a woman, I want to meet you.||39||Probably not 39|
|50Plushealths||Retired insurance Agent After 40 years of Services.||retired||Using reciprocation software|
|MrKRudd||Former PM of Australia… Proud granddad of Josie & McLean…||grandparent||Outlier. Former AUS PM|
For reproducibility we make an anonymised sample of the data and our code publicly available (address temporarily removed for anonymity). The data is in two parts: (1) a sparse bipartite adjacency matrix; (2) a vector of age category labels. This dataset was collected and cleaned according to the methodology described above and then down-sampled to give approximately equal numbers of labels in each of the seven classes detailed in Table 3. It includes only accounts that explicitly state an age (i.e., no grandparents or retirees). The adjacency matrix is in the format of a standard (sparse) design matrix and includes only features that are Followed by at least 10 examples. The high-level statistics of this network are described in Table 4.
Age Inference based on Follows
Given a set of 133,000 labelled data points (ground-truth, i.e., Twitter users who reveal their age) we wish to infer the age of the remaining 700m Twitter users. For this purpose, we define a set of features that can be extracted automatically. The features are based on the Following patterns of Twitter users. Once the features are defined, we propose a scalable probabilistic model for age inference.
Automatic Feature Selection
Our age inference exploits the hypothesis that someone's interests are indicative of their age, and uses Twitter Follows as a proxy for interests. Therefore, the features of our model are the 103,722 Twitter accounts that are Followed by more than ten labelled accounts; these can be identified automatically. Of the 73 accounts Followed by @williamockam, 8 had sufficient support to be included in our model: Lord_Voldemort7, WaltDisneyWorld, Applebees, UniStudios, UniversalORL, HorrorNightsORL, HorrorNights and OlanRogers.
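The support-based selection rule described above can be sketched as follows (the account names and the toy threshold are illustrative only):

```python
from collections import Counter

def select_features(follow_lists, min_support=10):
    """Given the Follow lists of the labelled accounts, keep as model
    features only those accounts Followed by more than `min_support`
    labelled users (a sketch of the selection rule described above)."""
    counts = Counter(account
                     for follows in follow_lists
                     for account in set(follows))  # de-dup per user
    return {account for account, c in counts.items() if c > min_support}

# Toy example with a tiny support threshold:
follows = [["WaltDisneyWorld", "Applebees"], ["WaltDisneyWorld"], ["Applebees"]]
print(select_features(follows, min_support=1))
```

On the real data this is a single counting pass over the labelled accounts' Follow lists, which is what makes the feature selection fully automatic and scalable.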
Table 5 shows the number of labelled accounts Following each @williamockam feature. The support is the number of labelled Followers summed over all age categories, while Followers gives the total number of Followers (labelled and unlabelled). A general trend across all features (not only the ones relevant to @williamockam) is that the age distribution is peaked towards "younger" ages, as not many older people reveal their age (we show this for the accounts with the highest support in our data set in the Appendix). To improve the predictive performance of the model in higher age categories we adapted our REGEX to search for grandparents and retirees. This augmented our training data with 176,748 people labelled as retired and 63,895 labelled as grandparents. In our ten-category model, retired people are added to the 65+ category. Grandparents are assigned a uniform distribution across the three oldest age categories, which roughly reflects the age distribution of grandparents in the US (UScensus, 2014; we use US figures as the US is the largest Twitter country). We thereby ended up with approximately 374,000 labelled accounts in our ground-truth data.
Probabilistic Model for Age Inference
We adopt a Bayesian classification paradigm as this provides a consistent framework to model the many causes of uncertainty (noisy labels, noisy features, survey estimates) encountered in the problem of age inference.
Our goal is to predict the age label y* of an arbitrary Twitter user with feature vector x* given the set of feature vectors X and corresponding ground-truth age labels y. Within a Bayesian framework, we are therefore interested in the posterior predictive distribution

p(y* | x*, X, y) ∝ p(x* | y*, X, y) p(y*),   (1)

where p(y*) is the prior distribution of Twitter user ages and p(x* | y*, X, y) the likelihood.
The prior p(y*) is based on a survey of American internet users conducted by Duggan (2013). They identified a sample of 1,802 over-18-year-olds (speaking either English or Spanish) using random cold calling and recorded their demographic information and use of social media. 288 of their respondents were Twitter users, which yields a small data set that we can use for the prior distribution over ages for over-18s. For under-18s we inferred the corresponding values of the prior using US census data (UScensus, 2010), which leads to our categorical prior (Equation (2)).
The likelihood p(x* | y*, X, y) is obtained as follows. For scalability we make the Naive Bayes assumption that the decisions to Follow accounts are independent given the age of the user. This yields the likelihood

p(x* | y* = a, X, y) = ∏_{j : x*_j = 1} p(x*_j = 1 | y* = a, X, y),

where a indexes the ten age categories and j = 1, …, F indexes the features. x_j = 1 means "user Follows feature account j". (We only consider cases where x_j = 1 since the Twitter graph is sparse: the number of edges in the full Twitter graph is a vanishing fraction of the possible edges, i.e., the default is to follow nobody. Hence, not Following an account does not contain enough information to justify the additional computational cost.)
We model the likelihood factors p(x_j = 1 | y = a, X, y) with Bernoulli parameters θ_{ja}, where j = 1, …, F indexes the F features and there are 10 age categories indexed by a. Since our labelled data is severely biased towards "younger" age categories we cannot simply learn multinomial distributions for each feature based on the relative frequencies of their Followers (see Table 1). To smooth out noisy observations of less popular feature accounts we use a hierarchical Bayesian model with conjugate data-dependent Beta priors

θ_{ja} ~ Beta(α_a, β_{ja})

on the Bernoulli parameters θ_{ja}. We seek hyper-parameters α_a, β_{ja} of the prior which do not have a large effect when ample data is available, but produce sensible distributions when it is not. To achieve this we set α_a to be constant across all features (hence dropping the feature subscript) and proportional to the total number of observations n_a in each age category (the count column in Table 1). We then set β_{ja} = α_a (N − F_j) / F_j, where N is the total number of Twitter users and F_j is the number of Followers of feature j (the Followers column of Table 5 for @williamockam's features). Then, the expected prior probability that a user in age category a Follows account j is E[θ_{ja}] = α_a / (α_a + β_{ja}) = F_j / N, i.e., it is constant across age classes and varies in proportion to the number of Followers across features. The effect of this procedure is to reduce the model confidence for features where data is limited. Due to conjugacy, the posterior distribution on θ_{ja} is also Beta distributed. Integrating out θ_{ja} we obtain

p(x_j = 1 | y = a, X, y) = (k_{ja} + α_a) / (n_a + α_a + β_{ja}),

where k_{ja} is the number of labelled Twitter users in age category a who Follow feature j (given in Table 5 for the @williamockam features) and n_a is the number of Twitter users in category a in the ground-truth (see Table 1). Performing this calculation yields the likelihoods for the @williamockam features shown in Table 6. We are now able to compute the predictive distribution in (1) to infer the age of an arbitrary Twitter user. The predictive distribution for @williamockam is shown in Figure 4 and is calculated by taking the product of the likelihoods from Table 6 with the prior (Equation (2)) and normalising.
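A minimal sketch of the smoothed likelihood and the resulting posterior predictive computation. The proportionality constant `alpha_scale` for α_a is an assumption (the paper only states that α_a is proportional to the category counts), and all numbers are toy values:

```python
import numpy as np

def feature_likelihoods(k, n, F, N, alpha_scale=1e-4):
    """Smoothed Bernoulli likelihoods p(x_j = 1 | y = a).
      k[j, a] : labelled users in age category a who Follow feature j
      n[a]    : labelled users in age category a
      F[j]    : total Followers of feature j (labelled + unlabelled)
      N       : total number of Twitter users
    alpha_scale is an assumed proportionality constant for alpha_a."""
    alpha = alpha_scale * n                                    # shape (A,)
    beta = alpha[None, :] * (N - F[:, None]) / F[:, None]      # shape (J, A)
    # Beta-Bernoulli posterior predictive: (k + alpha) / (n + alpha + beta)
    return (k + alpha[None, :]) / (n[None, :] + alpha[None, :] + beta)

def age_posterior(followed, lik, prior):
    """Posterior over age categories for a user, given the indices of the
    features they Follow (only x_j = 1 terms are used, as in the text)."""
    log_post = np.log(prior) + np.log(lik[followed]).sum(axis=0)
    log_post -= log_post.max()                 # for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy numbers (hypothetical): 2 features, 3 age categories.
k = np.array([[30., 5., 1.], [2., 10., 20.]])
n = np.array([1000., 800., 600.])
F = np.array([5e6, 2e6])
lik = feature_likelihoods(k, n, F, N=7e8)
post = age_posterior([0], lik, np.array([0.3, 0.4, 0.3]))
print(post.round(3))
```

Working in log space and touching only the Followed features keeps the per-user cost proportional to the number of Follows, which is what makes scoring 700m accounts feasible.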
The generative process in our model for the likelihood term in (1) is as follows:
- Draw an age category a from the categorical prior (2).
- For each feature j, draw the Follow probability θ_{ja} ~ Beta(α_a, β_{ja}).
- For each account, draw the Follows x_j ~ Bernoulli(θ_{ja}).
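A sketch of one draw from this generative process (the hyper-parameter values below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_user(prior, alpha, beta):
    """One draw from the generative process above (sketch): an age
    category a, then per-feature Beta-distributed Follow probabilities,
    then Bernoulli Follow indicators. alpha[a] and beta[j, a] are the
    Beta hyper-parameters."""
    a = rng.choice(len(prior), p=prior)       # draw an age category
    theta = rng.beta(alpha[a], beta[:, a])    # per-feature Follow probs
    x = rng.binomial(1, theta)                # draw the Follows
    return a, x

prior = np.array([0.3, 0.4, 0.3])
alpha = np.array([0.10, 0.08, 0.06])          # one alpha per age category
beta = np.full((5, 3), 20.0)                  # 5 features, 3 categories
a, x = sample_user(prior, alpha, beta)
print(a, x)
```

Because the small alpha and large beta make each theta tiny, most sampled Follow vectors are sparse, matching the observed sparsity of the Twitter graph.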
|Under 12||12–13||14–15||16–17||18–24||25–34||35–44||45–64 (both categories have the same features)||65+|
|vlogger||child presenter||child singer||singer||metalcore band||hip hop duo||hip hop artist||evangelist||political journalist|
|minecraft gamer||YouTuber||child singer||metalcore band||rock band||boy band||rapper||evangelist||retired cyclist|
|internet personality||child actress||child singer||deathcore singer||rapper||boy band||history channel||evangelist||golf channel|
|vlogger||child actress||child singer||electronic band||computer game||comedian||record label||faith group||retired rugby player|
|gaming commentator||girl band||child singer||electronic band||rock band||adult actress||boxer||faith magazine||boxer|
In Table 7, we report, for each age category, the five features with the highest posterior probability of being Followed given that category. The account descriptions are taken from the first line of the relevant Wikipedia page. The youngest Twitter users are characterised by an interest in internet celebrities and computer-game players. Music genres are important in differentiating all age groups from 12–45. 25–34 year olds are partly marked by entities that saw greater prominence in the past. This group is also distinguished by an interest in pornographic actors. Age categories 45–54 and 55–64 have the same top five and are differentiated by their interest in religious topics. Users older than 65 are identifiable through an interest in certain sports and politics.
We demonstrate the viability of our model for age inference in huge social networks by applying it to 700m Twitter accounts. We conducted three experiments: (1) we compare our approach with the language-based model by Nguyen (2013), which can be considered the state of the art for age inference; (2) we compare our age inference results with the survey by Duggan (2013); (3) we assess the quality of our age inference on a 10% hold-out set of ground-truth labels and compare it with inference based solely on the prior derived from census and survey data in Equation (2).
Comparison with Dutch Language Model
For comparison with the state-of-the-art work of Nguyen (2013)
based on linguistic features (Dutch tweets) we consider the performance of our model as a three-class classifier using the following age bands: under 18, 18–44 and 45+.
Fig. 9 lists the performance of our age inference algorithm on a 10% hold-out test set and the Dutch Language Model (DLM) proposed by Nguyen (2013). The corresponding performance statistics are shown in Table 9.
|Our Approach||DLM (Nguyen, 2013)|
Both methods perform equally well with a micro F1 score of 0.86. The precision and recall show that the DLM approach is data-efficient, extracting information from only a small training set (support). This is because significant engineering work went into labelling and feature design. In contrast, our feature generation process is automatic and scalable. While we do not achieve the same performance for the lower age categories, for the oldest age category our approach performs substantially better than the method by Nguyen (2013), suggesting that a hybrid method could perform well. We leave this for future work.
The major advantages of our model over the state-of-the-art approach are twofold. First, we have applied our age inference to 700m Twitter users, as opposed to being limited to a sample of Dutch Twitter users with a relatively high number of Tweets. Second, generating our training set is fully automatic and relies only on Twitter data (Nguyen (2013) used additional LinkedIn data for labelling), i.e., no manual labelling or verification is required.
Fig. 2 shows the areas under the receiver operating characteristic (ROC) curves for our three-class model. The curves are generated by measuring the true-positive and false-positive rates for each class over a range of classification thresholds. A perfect classifier has an area under the curve (AUC) equal to one, while a completely random classifier follows the dashed line with an AUC of 0.5. Performance is excellent for the under-18 and 45+ classes, but weaker for 18–44, where training data was limited; we note this as an area for improvement in future work.
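For reference, the AUC can be computed without explicit thresholding via its rank-statistic interpretation (a generic sketch, not the paper's evaluation code):

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive example outscores a randomly chosen negative
    one (equivalent to integrating the ROC curve; ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives AUC = 1.0; a random one hovers around 0.5.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```

For the three-class setting, each class is scored one-vs-rest using its predictive probability as the score, yielding one curve per class as in Fig. 2.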
Comparison with Survey and Census Data
We report results on inferring the age of arbitrary Twitter users with the ten-category model. Fig. 3 shows aggregate classification results for 700m Twitter accounts compared with expected counts based on survey and census data (S&C) (Duggan, 2013). Our model predicts that over 50% of Twitter users are between 18 and 35, i.e., the bias of the original training set has been removed by the Bayesian treatment. It is likely that S&C under-represents young people, as we did not factor in the increased rate of technology uptake amongst younger people when converting census data.
In the following, we assess the quality of our age inference model (10 categories) on a 10% hold-out test data set.
Table 8 shows the performance statistics for this experiment. The majority of the test cases are in the younger age categories (due to the bias of young people revealing their age) and in the older age categories (due to the inclusion of grandparents and retirees). Table 8 shows that the precision depends on the size of the data (e.g., the 25–44 categories are hard to predict), whereas the recall is fairly stable across all age categories. (Without the inclusion of grandparents and retirees in the training set, the predictive performance would rapidly drop off for ages greater than 35.) Our model significantly outperforms an approach based only on the survey and census data (S&C), which we use as a prior. This highlights the ability of our model to adapt to the data it actually sees.
We proposed a probabilistic model for age inference in Twitter. The model exploits generic properties of Twitter users, e.g., whom/what they follow, which is indicative of their interests and, therefore, their age. Our model performs as well as the current state of the art for inferring the age of Twitter users without being limited to specific linguistic or engineered features. We have successfully applied our model to infer the age of 700 million Twitter users demonstrating the scalability of our approach.
Appendix A Appendix
Age Extraction Using REGEX Matching of Descriptions
We extracted user ages from the free text Twitter description using UNIX scripting REGEX matching tools. The exact REGEX strings are included in Listing 1. An initial run of the REGEX revealed some frequent false positives with terms like ’I feel like I am 80’ or ’I am more than 10’, which were manually corrected for in the final iteration.
The Most Popular Accounts Followed by Labelled Users
We split the Followers into ten age categories. Table 10 shows that, across features, the age distribution is peaked towards "younger" ages and that not many older people reveal their age, even for the top features. The Followers column gives the total number of Followers of each feature across the Twitter network. There is a Pearson correlation of 0.86 between the support and the total Follower count for our data set.
The Most Discriminative Features in Each Category
For each feature we calculate the posterior probability of a user Following that feature given the user's age. We sort these posteriors within each age category and present the accounts with the five highest values in Table 11.
|Under 12-year olds|
|12–13 year olds|
|ivandorschner||child TV presenter||0.18||0.27||0.20||0.11||0.09||0.03||0.03||0.02||0.03||0.04|
|14–15 year olds|
|therealsavannah||child pop singer||0.10||0.18||0.27||0.21||0.12||0.02||0.01||0.03||0.03||0.03|
|jessicajarrell||child pop singer||0.12||0.21||0.26||0.24||0.10||0.02||0.01||0.01||0.01||0.01|
|TheDylanHolland||child R&B singer||0.12||0.22||0.26||0.24||0.11||0.02||0.01||0.01||0.01||0.01|
|16–17 year olds|
|18–24 year olds|
|25–34 year olds|
|icp||hip hop duo||0.02||0.04||0.05||0.09||0.19||0.37||0.09||0.04||0.05||0.05|
|35–44 year olds|
|djspooky||hip hop artist||0.01||0.02||0.03||0.02||0.04||0.15||0.45||0.14||0.06||0.08|
|HISTORYTV18||history TV channel||0.02||0.03||0.03||0.05||0.09||0.14||0.36||0.10||0.06||0.13|
|45–54 and 55–64-year olds (identical most-discriminant features)|
|People over 65|
|SkySportsGolf||golf TV channel||0.01||0.02||0.02||0.02||0.03||0.01||0.04||0.16||0.22||0.46|
|IamAustinHealey||retired rugby player||0.04||0.02||0.01||0.01||0.01||0.01||0.04||0.17||0.25||0.45|
- Al Zamal (2012) F. Al Zamal, W. Liu and D. Ruths Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors. In ICWSM, 2012
- Bishop (2006) C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
- Burger (2011) J. D. Burger, J. Henderson, G. Kim and G. Zarrella. Discriminating gender on Twitter. In EMNLP, 2011.
- Chang (2010) J. Chang, I. Rosenn, L. Backstrom and C. Marlow ePluribus: ethnicity on social networks. In ICWSM, 2010.
- Cheng (2010) Z. Cheng, J. Caverlee and K. Lee. You are where you tweet: a content-based approach to geo-locating Twitter users. In CIKM, 2010.
- Conover (2011) M. D. Conover, B. Gonçalves, J. Ratkiewicz, A. Flammini and F. Menczer. Predicting the political alignment of Twitter users. In PASSAT, 2011.
- Culotta (2015) A. Culotta, R. K. Nirmal and J. Cutler. Predicting the Demographics of Twitter Users from Website Traffic Data. In AAAI, 2015.
- Duggan (2013) M. Duggan and J. Brenner. The Demographics of Social Media Users—2012. Retrieved Sep 12 2015 from http://tinyurl.com/jk3v9tu
- Fang (2015) Q. Fang, J. Sang, C. Xu, and M. S. Hossain. Relational user attribute inference in social media. In IEEE Transactions on Multimedia, 17(7), 1031-1044. 2015.
- Fu (2010) Y. Fu, G. Guo and T. S. Huang. Age synthesis and estimation via faces: A survey. In Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(11), 1955-1976, 2010.
- Grainger (2014) T. Grainger and T. Potter. Solr in action. Manning Publications Co. Chicago, 2014
- Guo (2008) G. Guo, Y. Fu, C. R. Dyer and T. S. Huang. Image-based human age estimation by manifold learning and locally adjusted robust regression. In Image Processing, IEEE Transactions on, 17(7), 1178-1188, 2008.
- Gupta (2013) P. Gupta, A. Goel, J. Lin, A. Sharma, D. Wang and R. Zadeh. WTF: The Who to Follow Service at Twitter. In WWW, 2013.
- Kosinski (2013) M. Kosinski, D. Stillwell, and T. Graepel. Private Traits and Attributes are Predictable from Digital Records of Human Behavior. In PNAS, 110(15):5802–5805, 2013.
- Liu (2013) W. Liu and D. Ruths What’s in a name? using first names as features for gender inference in twitter. In AAAI Spring Symposium on Analyzing Microtext, 2013.
- McPherson (2001) M. McPherson, L. Smith-Lovin, and J. M. Cook. Birds of a Feather: Homophily in Social Networks. In Annual Review of Sociology, 27(1):415–444, 2001.
- DongNguyen (2014) D. Nguyen, D. Trieschnigg, A. S. Doğruöz, R. Gravel, M. Theune, T. Meder and F. de Jong. Why Gender and Age Prediction from Tweets is Hard: Lessons from a Crowdsourcing Experiment. In COLING, 2014.
- Mislove (2011) A. Mislove, S. Lehmann, and Y. Y. Ahn. Understanding the Demographics of Twitter Users. In ICWSM, 2011.
- Mohammady (2014) E. Mohammady and A. Culotta Using county demographics to infer attributes of twitter users. In ACL Joint Workshop on Social Dynamics and Personal Attributes in Social Media, 2014.
- Nguyen (2013) D. Nguyen, R. Gravel, D. Trieschnigg, and T. Meder. “How Old do You Think I am?” A Study of Language and Age in Twitter. In ICWSM, 2013.
- Nguyen (2011) D. Nguyen, N. A. Smith, and C. P. Rosé. Author Age Prediction from Text using Linear Regression. In LaTeCH, 2011.
- Oktay (2014) H. Oktay, A. Firat, and Z. Ertem. Demographic Breakdown of Twitter Users: An Analysis based on Names. In BIGDATA, 2014.
- Pennacchiotti (2011) M. Pennacchiotti and A. M. Popescu A machine learning approach to twitter user classification. In ICWSM, 2011.
- Rao (2010) D. Rao, D. Yarowsky, A. Shreevats, and M. Gupta. Classifying latent user attributes in Twitter. In SMUC, 2010.
- Schler (2006) J. Schler, M. Koppel, S. Argamon and J. W. Pennebaker Effects of age and gender on blogging. In AAAI-CAAW, 2006.
- Wysocki (2011) R. Wysocki and W. Zabierowski. Twisted Framework on Game Server Example. In CADSM, 2011.
- UScensus (2010) U.S. Census Bureau, 2010 Census. Profile of General Population and Housing Characteristics: 2010 Retrieved Sep 12, 2015 from http://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml
- UScensus (2014) U.S Census Bureau, American Community Survey, 2014 Grandparent Statistics Retrieved Nov 15, 2015 from http://www.statisticbrain.com/grandparent-statistics/