Sunday, November 22, 2015

Qualitative Data Analysis: Card Sorting

So you have some qualitative data -- maybe from interviews, maybe from an observation session -- and you want to do some data analysis. You know what you're looking for (e.g., causes of miscommunication or misunderstanding, resolution strategies), but you have no predetermined hypotheses about what themes might emerge from your data. One qualitative data analysis technique you can use is card sorting. 

This blog post was written in collaboration with my colleague Justin Smith. It is based on our experiences doing research in our group (the Developer Liberation Front) and on my time at Microsoft Research this past summer working with Tom Zimmermann in the Empirical Software Engineering group, focusing on what we have found to be most efficient. For those who are interested in how to do a card sort, or in how others do card sorting, here is how we approach it based on our experiences.

First things first: before you can do a card sort, even before you look at your data, you should have an idea of what it is you're looking for. For example, in the study I am working on, I'm interested in what makes tool output difficult to interpret; more specifically, I want to identify the areas where there is miscommunication, risk of miscommunication, or misunderstanding, and see what causes each.

  • Recommendation #1: Come up with some criteria that can be used when extracting quotes. Your criteria should be based on whatever it is you're searching for in the transcript. For example, with Justin's study, we wanted to find implicit and explicit questions developers need answered when resolving security vulnerabilities; therefore, one simple criterion was that the text should be an explicit question posed by the developer. The more specific you can be about what you're looking for, the easier it will be to extract data for your card sort.
  • Recommendation #2: Have at least two people extract quotes (including yourself, obviously), all using the criteria you put together. This will help validate your criteria as well as increase the validity of the quotes you extract. If you're using two people, both should extract quotes from all transcripts; once that's done, the two of you should sit down and determine where you agree and disagree. Two is usually enough, but if you decide to use three people, we recommend having two people work on each set of data (e.g., persons 1 and 2 on one set, persons 1 and 3 on another); once finished, each pair works out their disagreements. This is something you will want to report when you attempt to publish your findings :) (it's called inter-rater reliability; one way to quantify it is shown below).
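
A quick note on quantifying that, since it's easy to hand-wave: one common option (not the only one) is Cohen's kappa, which corrects raw agreement for the agreement you'd expect by chance. As a made-up example, say two raters each judge 100 passages as "extract" or "skip"; both say extract on 45, both say skip on 45, and they split the remaining 10 disagreements 5 each way (so each rater marked 50 of the 100 as extract):

    observed agreement: p_o = (45 + 45) / 100 = 0.90
    chance agreement:   p_e = (50/100)(50/100) + (50/100)(50/100) = 0.50
    kappa = (p_o - p_e) / (1 - p_e) = (0.90 - 0.50) / (1 - 0.50) = 0.80

Values in the 0.61-0.80 range are conventionally read as substantial agreement; whatever measure you use, report it alongside how you resolved disagreements.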

Once you have your set of quotes from your transcripts, you need to put them on notecards. Advice from someone who's been there: even though physical, paper notecards are a necessity for the sort itself, we highly recommend also keeping an electronic copy of your quotes (I put mine in Excel).

For our first card sorts, we made physical notecards by hand -- they looked nice and the card sort itself was okay, but keeping track of themes and sub-themes during and after the card sort was not trivial, and it sometimes led to confusion and the need for backtracking.

A pointer regarding using spreadsheets to store your data:
  • Anything you would include on your notecard should be a column in the spreadsheet; for example, the columns in my spreadsheet are Participant (P1, P2...), Tool Being Used, and Quote. I also have a column for a unique identifier for each quote (Card Number) and a column for the Emergent Themes from each round of the card sort. It might also be beneficial to include a Timestamp column; that way you have an approximate location in your recording if you need to find the quote again.

There are a number of advantages to having your quotes in a spreadsheet, specifically Excel:
  1. You don't have to worry (as much) about water or a random fire ruining your data. It's also harder to lose your data if it's electronically stored.
  2. Spreadsheets are searchable; paper notecards are not.
  3. Card sorts are typically done in iterations, and you want to be able to report anything that happens between them (e.g., cards moving from one theme to another); this is MUCH easier if you record the themes from each round of the card sort in the spreadsheet (see the sketch after this list). 
  4. Organizing paper notecards can be tedious and error-prone (e.g., hunting for a card and messing up an entire pile); with an electronic copy you can reorganize your data easily. And if you do reorganize the physical notecards, you at least have a record of how the data was arranged in your electronic copy.
  5. Actually making notecards is a time-consuming process, especially by hand. Instead, if you have your quotes in an Excel spreadsheet with labeled columns as mentioned earlier, you can use Mail Merge to create your notecards electronically :). Details on how to do this can be found in the supplementary blog post below.
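
Since advantage #3 is where the spreadsheet really earns its keep, here's a minimal sketch of the idea in Java. Everything in it is hypothetical: it assumes you've exported the spreadsheet to a file called cards.csv with columns CardNumber, Participant, Quote, Round1Theme, Round2Theme, and it splits rows naively on commas (real quotes will contain commas, so a tab-separated export or a proper CSV parser would be safer):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ThemeChanges {
    public static void main(String[] args) throws Exception {
        // cards.csv (hypothetical export): CardNumber,Participant,Quote,Round1Theme,Round2Theme
        List<String> rows = Files.readAllLines(Paths.get("cards.csv"));
        int moved = 0;
        for (String row : rows.subList(1, rows.size())) { // skip the header row
            String[] cols = row.split(",", -1);           // naive split; assumes no commas inside fields
            String card = cols[0], round1 = cols[3], round2 = cols[4];
            if (!round1.equals(round2)) {                 // theme changed between rounds
                System.out.println("Card " + card + ": " + round1 + " -> " + round2);
                moved++;
            }
        }
        System.out.println(moved + " cards changed themes between rounds");
    }
}

Nothing fancy, but answering "which cards moved, and where?" this quickly is nearly impossible with a pile of paper notecards.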

Now it's time to complete the card sort. This can be done in one phase; however, we recommend doing it in multiple phases. Minimally, there's a phase 1 for preliminary sorting into themes and a phase 2 for sorting those themes into higher-level themes. This is particularly useful for large datasets, where you can wind up with a large number of themes after the first phase; typically there are commonalities among those separate themes, warranting another round of sorting. You may also want to include a validation phase once you have determined all the low-level emergent themes (after phase 1). This phase ensures that every quote has been sorted into the best possible theme.

  • Recommendation #1: Include others in the card sort process; this lessens the bias behind the themes you find (and, in our experience, helps produce distinct themes with clear definitions, which is super important when working with qualitative data). One thing to be aware of: the more cooks in the kitchen, the longer it might take (more potential for disagreement and need for discussion), so plan accordingly. For our most recent card sort, 2 hours was our max at one time, so with a little over 300 notecards we did four 2-hour sessions. Previously I've had shorter sessions; the longer time for this study, we believe, is a product of the type of data we're working with (non-interview).
  • Recommendation #2: As you're doing your card sort, keep track of important information as it changes (e.g., how you define your themes/sub-themes). Also, keep track of quotes that you and your sorters believe best represent each theme. Doing these things will make reporting your findings much easier. 

Once you've done all this, you're ready to start thinking about what your paper is going to look like and where the interesting stories are in your themes (the fun part). With that being said, good luck fellow qualitative researchers! :)

Thanks again to Justin for helping me put this together and the Developer Liberation Front and Tom for the experiences!

Creating Notecards using Microsoft Word Mail Merge

This blog post provides step-by-step instructions for creating notecards (possibly for a card sort) using Microsoft Excel and Word. Thanks to Tom Zimmermann of Microsoft Research who taught me this nifty trick :D.

This tutorial assumes you already have your data in a spreadsheet, as discussed in another blog post.


  1. Save your Excel spreadsheet -- make sure your data columns are labeled.
  2. Open or create a Word template with the number of notecards you would like on one page. I created my own using the Create Table command in Word; either way, make sure you know what size each notecard is (you'll need it later).
  3. Once you have the template/document open, you will need to put the various components of your notecard where you'd like them to be. As an example, the template I created based on my spreadsheet columns can be found here.
  4. Next, select the Mailings tab > Start Mail Merge > Email Messages. It technically doesn't matter what you select here; I chose Email Messages because it makes life easier :).
  5. Next, select Select Recipients >  Use An Existing List...
    Browse to and select the spreadsheet with your data. If your workbook has multiple sheets, make sure you select the correct one.
  6. Now you want to map the different "fields" in your document to the columns in your spreadsheet. This is done using the Insert Merge Field menu.
    The list that comes up is populated with the columns in your spreadsheet -- insert each field into one notecard (e.g., put the <<Participant>> merge field where the participant ID should go), then copy and paste all the fields onto each remaining notecard. Once finished, it will look something like this:

  7. If you click Preview Results you can see what your notecards will look like. If you click it now, each notecard will have the same information on it. This is because you have to tell Mail Merge to go to the next record in the spreadsheet.

    To tell Word you want to go to the next record for each notecard, you need to add a rule; this is done by going to the Rules menu and selecting Next Record. You need the <<Next Record>> field on each notecard; it should look something like this:


    Now when you click Preview Results you should see different data for each notecard.

  8. Now you're ready to Finish & Merge!
    I typically select Edit Individual Documents... so I can make sure everything is copacetic.
  9. The final step is optional, but recommended: because quotes vary in length, depending on the template you used you may want to remove the extra blank space that shorter quotes leave on their notecards. It's a tedious process, but it helps save paper :).

Once you have your notecards ready, you can print, cut, then you're ready to get to sortin'! :D
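
Side note: if Word isn't an option, the same one-row-per-notecard idea can be rough-sketched in plain Java. This is just an illustration, not a replacement for the Mail Merge steps above; it assumes the same hypothetical cards.csv export used in the sketch in the previous post (CardNumber, Participant, Quote, ...) and prints card text you could drop into any template:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class PrintCards {
    public static void main(String[] args) throws Exception {
        List<String> rows = Files.readAllLines(Paths.get("cards.csv"));
        for (String row : rows.subList(1, rows.size())) {   // skip the header row
            String[] cols = row.split(",", -1);             // naive split; assumes no commas inside fields
            System.out.println("Card " + cols[0] + "  (Participant " + cols[1] + ")");
            System.out.println(cols[2]);                    // the quote itself
            System.out.println("--------------------------------------------");
        }
    }
}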


Monday, November 16, 2015

IBM University Day 2015 - Women in Education and Research


I attended IBM University Day for the first time this past Friday. I didn't know what to expect. I had never been to this event, which apparently has been going on for some time now. Also, I've always heard (and sometimes felt) a stigma around IBM that made me wary of an event they would host titled "Women in Education and Research". Aside from the fact that I've always associated IBM with conservative, older white men, as an African American woman I'm always looking for someone like me doing extraordinary things at events like this, and I am most often disappointed. I just knew this event would be no different...except I was surprisingly and so thankfully wrong. Of course, all the speakers were women, with different backgrounds and areas of work. On top of this diversity, 4 of the 12 speakers were African American -- that's 33%! Compared to the ~3% of us that make up the entire tech industry, that's incredibly refreshing! Although not everyone built their career in a technical field, most of them came up in STEM, which makes it even more refreshing. I always love to see women succeed, but it is especially helpful for me to see the variety that was exhibited at this event.



Aside from attending out of curiosity, the event included a "poster session" (see me above at the poster session :D). I use "" because I think I spent a total of 15 minutes at my poster where there was traffic in that area. That's the one complaint I have about the event -- if I hear poster session, I'm thinking I'm gonna have some time to flex. I had almost none. Aside from that, however, the research I am currently working on requires developers. And since IBM codes in Java, it seemed fitting that I take the opportunity to meet folks and make connections I could use to recruit developers. Fortunately, despite the little to no time I had at my poster to explain my research, I was able to chat with some folks and get them interested in helping me out. I can't say I'd attend to "present" a poster again; however, it was not a total loss :).

As for the bulk of the event, there was a series of talks given by females with various backgrounds in education and research. The major take-aways I got from the series are:

  • Always be you; it’s okay to be different! From different comes change, and change is almost always good.
  • Nothing is set in stone; don’t be afraid to try different careers. Sometimes that’s what it takes find your passion/niche.
  • Adversity is almost inevitable, especially as the minority (women, AA, Latino/a); deal with it in stride and know you’re not alone.
  • Women make incredible contributions everyday - let’s keep the trend going and bring our contributions to the forefront where they deserve to be!

For those who are curious, or wanted to attend but missed it, here's the line up for your exploring pleasure :)

Fran O'Sullivan
IBM Senior State Executive for NC and General Manager of Systems Strategy and Operations


Fran O'Sullivan's talk focused on her history at IBM as a woman and lessons she learned along the way. This was a dominant theme among the talks. One interesting part of her timeline, which began in the 80s, is that the first women appeared in her managerial chain in 2014 -- just last year. Unfortunately, this wasn't super surprising, especially for IBM. Another interesting story was the "Frank" story. One of her bosses called her Frank on a note; she went to his office and asked "Who's Frank?". Of course he was flustered by his mistake, but she made light of the situation. One piece of advice she had for the audience was not to take everything so seriously. She ended her talk with a call to action to get and keep women in STEM (see photo).



Dr. Tashni Ann Dubroy
President, Shaw University


Dr. Dubroy spoke on her background, her experiences, and why she feels we need more women in education and research -- more specifically, the need for more women in STEM. She was born in Jamaica, adding to the diversity I spoke of, and decided at a young age that she wanted to study chemistry. She spoke on quite a few things I, as well as others, could relate to. For example, she spoke of her difficulties with chemistry when she took her first course and how someone told her she had a "mental block" that was preventing her learning. She eventually overcame this mental block, but I think this is something that happens in CS as well; there are mental blocks around concepts that seem difficult when really it's just a matter of relating what you're learning to something you already know or understand. She also spoke on how a positive outlook leads to positive direction in life (even when you don't immediately realize it) and how being an "all arounder", or someone with various aspects to their background outside their main area, is a pro, not a con.
Dr. Dubroy is also an entrepreneur (part of being an "all arounder" :D), having co-founded the Brilliant and Beautiful Foundation and a hair care line called Tea and Honey Blends. How cool is that?!




Dr. Terri Lomax
Executive Vice President, Discovery-Science-Technology at RTI (Research Triangle Institute)

The theme of Dr. Lomax's presentation was "change is good". I was sold before she began, but if I hadn't been, she was a great example of why change is good (i.e. not scary, and typically for the better). She went through lots of changes on her journey to where she is today, but the most relevant one, which I think anyone considering a PhD should know about, is her advisor horror story. She had an advisor who refused to be helpful; whether it was a personal problem with her or just his way, it didn't benefit her. She changed advisors and completed her PhD with a supportive advisor. I know too many people who have had similar issues (and stuck with that advisor for much longer than I could have) -- so know that change is good. Often change, especially in the context of this example, can be the difference between finishing your PhD in 5 years and 10! One of her changes even brought her to the wonderful NC State :). She also talked about her initiatives to make CS relatable and more appealing to younger audiences by holding forensic weekends where they can "do science and meet people". 


Susan Kellogg
Associate Vice Chancellor and Deputy Chief Information Officer in ITS at UNC


Susan Kellogg, like the rest, spoke on her journey -- more specifically, she focused on why she chose academia and gave advice for career decision-making. And she did so without any slides (which is ideal if done well -- and she did it well). Two major points came from her talk: 1) pay attention to the fine print and 2) be true to you. There was an interesting story behind each of these pieces of advice; however, "be true to you" was the one that stuck with me, especially considering I'm somewhat of an outlier in what I do for various reasons (being an African American woman, my love for tattoos, piercings, and fashion, etc.). Her story centered around her pants suits; yes, pants suits. She didn't realize it as she was doing it, but just by being herself she changed the culture of one of the companies in her career path. Coming into the company, she was the only woman wearing pants suits -- rather than changing who she was or shying away from it, she owned it. Before she knew it, more women were wearing pants suits rather than skirts and dresses. Small wins.


Dr. Wanda Lester
Interim Dean of the School of Business,  NCCU

This one hit close to home, as Dr. Lester is from Tallahassee, Florida -- same city as the love of my life :). One of the more experienced speakers of the day, Dr. Lester spoke on her experiences as an African American woman building her career at a time when racial tensions were worse than they are right now (also without slides). Despite any changes, trials, and tribulations she encountered, she kept her head held high and spent many years in education building her career. She spoke on the importance of mentoring, something I harp on regularly, both on here and in person with others I meet. She talked about long-term mentors, but she also brought up something I had never thought about, which she called "momentary mentors". These are people who may not always be around as mentors but have served, or will serve, a specific purpose on your career path. Although I have always considered Dr. Bowring to be my mentor, as I think about what a momentary mentor is, I know I've had those along the way and continue to meet more as I work towards my degree. 


Ana Biazetti
Chair of IBM NC TEC (North Carolina Technical Experts Council)

I was trying to make the most of the little bit of time I had at my poster, so I missed the first part of this talk. Based on the portion I saw, there was the same theme of "here's my journey and the advice I have based on my experiences". The first slide I saw was "how to be an effective technical leader" (which I took a photo of, but the background makes it hard to read). Though I didn't get to hear her talk through it, I can see how her advice can apply to any career, such as collaborating outside your team and focusing on execution for impact.


Dr. Veena Misra
Director of NC State ASSIST

Dr. Misra is a three-time NC State graduate (BS, MS, PhD) and is now a professor at NC State -- one of those rare stories, like that of my co-advisor Sarah Heckman. She discussed her work in ASSIST, an NSF center for research on wearables and sensors. I actually got a chance to chat with one of the students working in the ASSIST center, and I must say, there's pretty cool research going on :). She also focused specifically on the gifts and challenges in her personal journey. A gift that stood out, one I can only agree with because I've experienced it myself, is unexpected opportunities. I take it as a general rule of thumb that unexpected opportunities (hell, any opportunity) should be taken advantage of, especially if they benefit your path or career. On the challenges side, one of the typical challenges discussed is work/life balance, which of course she mentioned. However, she also mentioned some challenges that I never would have thought of myself (but have experienced): "dealing with negative news" and "leading while being you". Both are important to your sanity in grad school -- negative news is inevitable, so you want to be conscious of how you deal with it, and as Susan also said, you never want to lose yourself in anything you do. Always be you, because you are awesome!


Dr. Donna Grant
Associate Professor/ CIS Department Chair, NCCU

Dr. Grant's talk may have been one of my favorites...she was energetic and focused her talk on her journey in STEM and how she learned to soar (something we all want to do). One of the most interesting facts she brought up is that she got her PhD from DePaul University in 2007...and was the first African American person to ever do so. Not female. Person. This is similar to the PhD program in CS at NC State -- the first African American woman to earn her PhD there did so in 2006. For universities like NC State, where the PhD program has been around for over 40 years, this is craziness. Dr. Grant also discussed her journey from corporate to academia, where she used her corporate background to inform her lessons -- which is also one of the teaching philosophies I'm developing. Just one advantage of going into industry and then back to academia :). She also mentioned quotes that inspire and motivate her, including the serenity prayer and a quote about fear of being powerful.


Dr. Alisha Malloy
Associate Professor, Former CIS Department Chair, NCCU

Sharing a time slot with Dr. Grant, Dr. Malloy (who formerly held the role Dr. Grant has now) spoke on her experiences and how her military background informed and led to her decision to pursue a PhD and build a career in academia. Her desire to get a PhD and educate others came from her experience in the military, where she was 1 of 2 women and, of course, in the minority as an African American. She looked at this discrepancy in numbers and figured she could either complain about it or be the change. She decided to be the change -- so she got her PhD and moved into academia, where she could pay it forward. One of my favorite life philosophies: I got, so now I should give back.


Dr. Susan Rodger
Head of the NC Alice program,  Professor of the Practice, Duke

I've had the privilege of meeting Dr. Rodger and hearing her speak at previous events, so I already knew going in a good bit about her current work. She gave some insights into her background and how she got where she is today, giving advice along the way. One interesting piece of advice came from her discussion on how she chose to attend graduate school. She wasn't sure whether she wanted to go to graduate school or into industry after graduating undergrad, so she put in applications for both. When she decided that grad school was the route for her, she already had job offers. Rather than turning them down outright, she asked each about the opportunity for summer internships -- brilliant! I wish I would have thought of this but hopefully this piece of information will benefit someone :). She also spoke on some of her initiatives, including Alice In Schools and Notable Women in CS.


Dr. Rada Chirkova
Director of NC State STEED

Dr. Chirkova, a professor at NC State (CSC), didn't spend much time talking; however, she made some great points in the short presentation she gave. She spent some time talking about the STEED (Science of Technologies for End-to-End Enablement of Data) group's research but spent most of her time talking about the people who have supported her since her start as a professor at NC State, both the university and individuals. Although I'm a student, not a member of the faculty, I can relate to the feeling of support she described. Not everyone is equally helpful, but there are many people who are truly here to help. The major piece of advice she wanted to pass along was not to try to do everything yourself, something else I can relate to from experience. "Listen and delegate" were her exact words. Working toward your PhD, as well as what comes after, can be stressful -- the more help and support you can get, the less stressed you'll be. 


Dr. Rachana Gupta
Associate Director of NC State ECE Senior Design

Dr. Gupta, another member of the NC State faculty (ECE), also gave a fairly short talk -- possibly because she was last and the event was already running over time when she came up to bat. She spent most of her time talking about her research and the work she does with ECE Senior Design. She also discussed how to make yourself marketable to companies. One piece of advice she wanted to pass along at the end of her talk -- something I've said many times and heard from others -- is that the PhD really isn't for everyone. It's important to know that you want to do the PhD and to have some motivation to finish; it gets rough, so you need that intrinsic motivation to keep you going (this I know from experience).



Wednesday, October 21, 2015

VL/HCC Day 3

I will state again, I love VL/HCC :)
If ever given the opportunity to go (and you do any human-centric or visual language research), GO! It's valuable in so many ways...even the talks you think you're not interested in, you end up loving. Like today's keynote...I thought, not another blocks talk...and then it happened. And I was glad I was there.


The keynote today was titled "Taking Stock of Blocks: Promises and Challenges of Blocks Programming Languages", given by Franklyn Turbak of the Wellesley College Computer Science Department. I was going to write a post about it, but Felienne already did a great job, so check it out!

One thing I found interesting is that there is a long history of blocks-based programming. Here is the lineage discussed in the talk:

Blox (Glinert, 1986) - first time puzzle pieces used to represent code
LogoBlocks (Begel, 1996)
Alice (Pausch et al., 2001) - 3D animations; evolved from Python to drag and drop
PicoBlocks (Bonta, Silverman, et al., 2006) - microprocessor for robotics; passes the "Lucite Test" - imagine constructing out of physical blocks; also has extension language
Scratch (Resnick et al., 2007) - best of Alice and PicoBlocks
StarLogo TNG (Roque, Wendel, et al., 2007) - created the OpenBlocks framework for others to build languages on
BYOB/Snap! (Harvey et al., 2008) - has first-class functions
App Inventor Classic - clunky
Blockly (Fraser, 2012) - javascript based (embedded in web browser); mutators = edit blocks with other mini block language
App Inventor 2 (2013) - local variables, improved parameters
PencilCode (Bau 2013) - toggle between blocks and text in interesting way
Droplets (Bau 2014) - also toggles between blocks and text

Languages with physical blocks (cool!)
Robot Park
Tangible Kindergarten

As the conference comes to an end, I think about how grateful I am for the experiences I've had and continue to have as a PhD student. I met and connected with one of my research/blogging idols (Felienne -- we even had drinks together! :D), made some new, unexpected connections with other researchers that I should definitely know (Mark Guzdial, Caitlin Kelleher, and Ronald Metoyer, to name a few), and even realized that I've advanced in my field/area, as I know more and more of the people I encounter at these conferences (and they actually like my research!!). It's venues like VL/HCC where I feel like I get the most value as a researcher -- I'm walking away more ready and confident than I came. And that's alright :).

Favorite talks (from the first session -- I took the second half to visit family :D):

A Syntax-Directed Keyboard Extension for Writing Source Code on Touchscreen Devices
Islam Almusaly and Ronald Metoyer
(screenshot)

Adapting Higher-order List Operators for Blocks Programming
Soojin Kim and Franklyn Turbak
PHOLOs - "Pseudo-Higher-Order Operators"

Hub Map: A new approach for visualizing traffic data sets with multi-attribute link data
Andrew Simmons, Iman Avazpour, Hai L. Vu, Rajesh Vasa
perfect for venue location (ATL known for traffic)


Interesting papers (missed the talk):
Natural Language Programming: Designing Effective Environments for Novices
Judith Good and Katherine Howland

A Principled Evaluation for a Principled Idea Garden
Will Jernigan, Amber Horvath, Michael Lee, Margaret Burnett, Taylor Cuilty, Sandeep Kuttal, Anicia Peters, Irwin Kwan, Faezeh Bahmani, Andrew Ko

Enabling Independent Learning of Programming Concepts through Programming Completion Puzzles
Kyle J. Harms, Noah Rowlett, Caitlin Kelleher






Tuesday, October 20, 2015

VL/HCC Day 2 - Keynote and other memorable talks

Another great day at VL/HCC :).

To start, there was an amazing keynote given by Georgia Tech's own Mark Guzdial, a CS education legend, titled "Requirements for a Computing-Literate Society". The focus of the talk was the challenges of working towards a computing-literate society and how we can re-invent CS education to accomplish this goal. I must say, as an African American female from South Carolina, where CS education is almost non-existent in K-12 and almost completely unrelatable beyond that, I could completely understand and relate to everything he said. Because I loved it SO much, I took some notes to share some of the major points.

WHY SHOULD WE CARE?
    Mark made a few great points as to why we should care about the advancement and spread of CS education and knowledge. The two predominant ones (that for sure stood out) are that 1) CS is the study of process and problem solving, which impacts everyone, and 2) learning and understanding computer science gives people the ability to express themselves in ways they couldn't without programs/automation.

WHAT ARE THE CHALLENGES TO ACCOMPLISHING THIS GOAL?
   One of the obvious and most talked-about challenges is access to computer science courses or resources for learning CS, along with accommodations for diversity -- though there are initiatives, such as CODE2040, aimed at increasing diversity and access to computing resources, there is still a gap leading to lower participation. This is especially true for underrepresented minorities like myself.

Another challenge, which I had never thought about, was what Mark called the "inverse Lake Wobegon effect" -- in other words, we think we know more than we do. The idea is that, the way things are currently done, we only see the top half. The top half, which I would NOT include myself in, would be, for example, students with CS courses in high school. Those with access are the most privileged, meaning they are the ones that get noticed by universities -- or, even at the university level, get noticed by professors and other students as being the "real deal" -- while people like me fall by the wayside. Thank goodness for my mentor who wouldn't let that happen - #thepowerofmentors




The last challenge he discussed was the unanswered questions that policy-makers continue to ask. To segue into this discussion, Mark started with initiatives such as "Georgia Computes!" and CAITE that aim to inspire students to study CS. He also alluded to some of the differences across states that make it difficult to make concrete, uniform improvements -- these differences also relate to unanswered questions that could affect how we improve and facilitate CS education for the masses. For example, states vary in their opinions of AP CS courses (apparently, to some, access to advanced education is considered 'elitist'), on what CS is, and on whether to require CS. From this, Mark noted other questions that are unanswered, and extremely relevant, such as whether the CS requirements are really CS. For this he used South Carolina as an example, which hit home for me. I was shocked when I heard that the SC curriculum requires CS for graduation. It was only a few years back that I graduated, and although I did take the one and only programming class offered by my high school, I knew nothing of a CS requirement or anything close to it. According to some of his findings, one of the questions that needs to be answered is 'what kind of CS can we teach to everyone?' -- especially considering how important a high school diploma is. We don't want people failing to graduate because they can't pass a CS course, but at the same time, typing is not computer science >.>.



One of the more interesting findings presented is that although initiatives like "Georgia Computes!" seem to be effective overall, they have different effects on different minorities/populations. But why?? For example, when looking at the effects for Black students in CS (graph above), there is an almost completely flat line, meaning there is no increase in participation. This all showcases the need for more research and exploration into why minorities, such as African Americans, choose to study CS and how we can get them engaged and keep them engaged. Fortunately, Mark had some ideas for that as well :)

WHAT ARE SOME POSSIBLE SOLUTIONS?




   Mark discussed two possible solutions, each of which takes on a different aspect of the CS education problem.

1) The Role of Context - For many, the problem is that CS seems irrelevant to them or their lives. I can say for me, if I hadn't discovered I could fiddle with my MySpace page and that that could be considered CS, I might have thought the same thing! But many are asking, how can computing be considered irrelevant? Easy. The context in which we teach/introduce CS is critical, especially for younger audiences who may have preconceived (incorrect) notions of what CS is. For example, some of the problems we are first confronted with when learning CS are the Tower of Hanoi or the Fibonacci sequence. How many of us can say that what we do now is at all relevant or related to either of these?? I know I sure can't. Increasing interest could be as simple as teaching in a relevant context (e.g., robots and digital media).
An example of this that he spoke about is Glitch Game Testers, started by Betsy DiSalvo and Amy Bruckman. This program hires African American males as game testers and has them think about computing more deeply than the curriculum requires. It turns out all of them finished high school, and over half continued on to take computing classes post-secondary. This further showcases the importance of making CS relatable and relevant.



2) Understanding CS Teachers' Needs - An important foundation for CS education is, of course, the educators! Mark suggests that teachers need a sense of identity, which takes the form of confidence in their ability, as well as a sense of community with role models to look up to (which I relate to completely). Disciplinary Commons is a group that was started to bring CS teachers together to talk about classes; it's not big, but it's a step in the right direction. There is also a need for more professional learning on how to teach CS; too often we assume that to teach someone about CS, you have to be damn near a software developer. But is that true? Mark posed this question, and it's another one of those things you don't think about until someone else says it. An example he used is that one of the most successful CS teachers actually focuses less on coding and more on writing assignments -- because the goal is for students to learn and understand what CS is. If it's something they're interested in, the rest will follow.

Although my research area isn't CS Education, I was extremely moved by this talk and I hope the work continues and gets the publicity and backing it needs to really make a difference for states like my home.


As I did yesterday, here are some of the more memorable talks from today (again, in my opinion :D):

Supporting Exploratory Data Analysis with Live Programming
Robert DeLine and Danyel Fisher
Related:
Tempe -- web app for live exploration and analysis of data

Tempe: Live Scripting for Live Data (short paper on technology from full paper above)
Robert DeLine, Danyel Fisher, Badrish Chandramouli, Jonathan Goldstein, Michael Barnett, James Terwilliger, and John Wernsing

Jeeves – A Visual Programming Environment for Mobile Experience Sampling
Daniel Rough, Aaron Quigley
replacing paper "diaries" and expensive or difficult apps with Jeeves (using visual programming)
great/engaging slide set!

Detecting Problematic Lookup Functions in Spreadsheets
Felienne Hermans, Efthimia Aivaloglou, Bas Jansen
discusses usage of and problems with lookup functions in Excel
<3 presentation style (less than 1 minute summary at end)

Interactive Visual Machine Learning in Spreadsheets
Advait Sarkar, Mateja Jamnik, Alan Blackwell and Martin Spott
BrainCel v0.2 - spreadsheets and visualizations to help end users use and understand machine learning

Extending Scratch: New Pathways into Programming 
Sayamindu Dasgupta, Shane Clements, Abdulrahman Y. Idlbi, Chris Willis-Ford and Mitchel Resnick
Scratch Extension System, toolkit for anyone to extend Scratch language and capabilities
Resources:
http://scratchx.org
http://wiki.scratch.mit.edu/wiki/Scratch_Extension_Protocol_(2.0)
Considerations:
maintaining a low barrier to entry, consistency with other blocks/conventions, the right level of abstraction, and choosing the right extensions

Strengthening Collaborative Groups Through Art-Mediated Self-Expression
Mengyao Zhao, Yi Wang, David Redmiles
Building interpersonal relationships between local and remote team members via art with Doodled "Us" - collaborative doodle system

Understanding Triggers for Clarification Requests in Community-Based Software Help Forums
Nathaniel Hudson, Parmit K. Chilana, Xiaoyu Guo, Jason Day, and Edmund Liu
What causes people to ask clarifying questions to improve Q&A site experiences --> design interventions to make Q&A sites more efficient



I will try to post as much as I can tomorrow - it's my last day so I really want to visit family that's down here. One of the downsides of the PhD is you really don't get to see family and friends as much as you like so advantage I will take of this :)
Until next time!

Monday, October 19, 2015

VL/HCC 2015 - Atlanta, GA (GC & Day 1)

Whew!

After traveling to Houston for Grace Hopper (post to come soon about the happenings there), and now to ATL for VL/HCC (IEEE Symposium on Visual Languages and Human Centric Computing) I'm pretty exhausted. BUT not too exhausted to share some of the greatness that has gone down since I've gotten here for the conference :).

To start, the GC (Graduate Consortium, for those who don't know) was amazing. I found myself comparing it to last year, and I can honestly say it just keeps getting better. Now, this may be because my research keeps getting more refined, which makes it more suitable for feedback, but I feel like I really got some ideas that are gonna push me in the right direction. On top of the great feedback, I met and connected with some AMAZING PhD students at various stages of their careers. Despite all having unique research interests and directions, we were all able to provide insights to improve each other's work (and even potentially reference each other's work).

Now, for the first day of the conference. The theme this year is "Computational Thinking and Computer Science Education"...one thing I love about VL/HCC is that it's a smaller venue, so it's a lot more personal. I walked in for the intro and first session and found a seat next to a friendly looking female...of course I asked if the seat was taken and then proceeded to introduce myself. Come to find out, I was sitting next to none other than Felienne Hermans of Delft University in the Netherlands! The cool thing about it is I've been virtually stalking her ever since she wrote a blog post about my first conference paper/presentation 2 years ago...and now I've met her face to face. And it feels awesome. The most awesome part is I got to talk to her about my dissertation research, and she loved the idea! We did some brainstorming and she even had some work she's going to pass my way related to it. Yet another boost of confidence for my dissertation research :).

It would have been hard to ruin my day after how it started; fortunately, I didn't have to worry about that. The day was filled with interesting talks related to computational thinking and computer science education. Although all the talks were great, here are some of my favorites from the day:

Tutorons: Generating Context-Relevant, On-Demand Explanations and Demonstrations of Online Code - context-relevant explanations of code (browser add-on)
Andrew Head, Codanda Appachu, Marti A. Hearst, Bjorn Hartmann
Resources:
http://www.tutorons.com/


Codepourri: Creating Visual Coding Tutorials Using A Volunteer Crowd Of Learners - crowdsourced visual tutorials for learning to program (using a crowd of learners)
Mitchell Gordon and Philip J. Guo

Personality and Intrinsic Motivational Factors in EUP - model that predicts motivation based on personality profiles
Saeed Aghaee, Alan F. Blackwell, David Stillwell, Michal Kosinski
Terms:
1. bricoleurism - like to build things, tinkering with stuff
2. technophilia - love of tech/new tech
3. artistry - enjoy experimenting with creative ideas
Resources:
MyPersonality dataset
Big Five Inventory Personality Test
Related:
Facebook 'likes' to predict personality profiles
Computer better predictor of personality than people

Scientists Tell Stories About Seeking Help with Programming - "war stories" to determine help seeking behaviors of EU scientists [qualitative study]
Brian Frey and Carolyn Seaman

Facilitating Testing and Debugging of Markov Decision Processes with Interactive Visualizations
Sean McGregor, Hailey Buckingham, Thomas G. Dietterich, Rachel Houtman, Claire Montgomery, Ronald Metoyer
Resources:
MDPVis.github.io

A Study of Interactive Code Annotation for Access Control Vulnerabilities
Tyler Thomas, Bill Chu, Heather Lipford, Justin Smith, Emerson Murphy-Hill
Memorable Quote: "Ain't nobody wanna be hacked"

Codechella: program for interactive and collaborative tutoring/building mental models - simulate in-person help in an online platform
Philip J. Guo, Jeffery White, Renan Zanelatto
Resources:
Demos: pgbovine.net/rosetta
Live: pythontutor.com

Semantic Zooming of Code Change History
Youngseok Yoon and Brad A. Myers


That's all for now - I'll try to post again for day 2 and will definitely be posting about GHC soon. Until next time!







Friday, October 9, 2015

JGit Growing Pains

So, I've been using JGit (I believe a previous blog post talks about this) for about a year now...someone needs to explain to me why I'm only now learning how the revert command works in this API!?

Let me break it down...JGit allows you to manipulate Git repositories from Java. Cool shit, right? However, whether because of my lack of experience with JGit or my (at the time) noob-ish knowledge of Git, I am only now learning that revert does not do what I would expect. Take the following code, for example...

// Find the commit matching currentHash and revert it
for (RevCommit rev : revisions) {
    if (ObjectId.toString(rev.getId()).equals(currentHash)) {
        git.revert().include(rev).call();
    }
}

Now, as most developers do, I went to the internet to find out how to revert a repository from Java, and this is what I found -- not much more documentation than what you're looking at right here... As far as I could recall, the way revert worked was that you could revert your repository to any given revision; because of this, I made the silly assumption that the code above would revert the repository to the RevCommit I passed in. Except I was wrong. And I didn't realize I was wrong until the revision I was analyzing did not match up with the revision I was diffing in my code from a file add...

Today, one year later, I have been scouring the internet to find out how this sequence of method calls works and, specifically, what the include(RevCommit) method does. Does include mean this is the revision I want to revert to? Does it mean I want to revert the changes I made at this revision, going back to the previous revision? Do I need more than one include call to revert back more than one revision? All of these are questions that, unfortunately, the interwebs is doing not so great a job of answering.

For those curious, revert (in this context) actually works by reverting the changes made in the RevCommit passed in. So whatever revision gets passed in has its changes undone, taking you back to the revision before...which meant, due to my misunderstanding, I was basically doing everything backwards. I'm almost embarrassed it took me so long to realize this problem; then I remember that if there were some clear documentation out there with this information, I could have discovered the issue earlier (or prevented it altogether). For those of you thinking "that couldn't have made that big of a difference"...once I got the revert process correct, I had to restructure the way my code works for some of the new detectors I implemented, as well as for detecting any sort of pattern removal from the repository.

So my word of advice? If you're using JGit for analyzing source code at each revision in a repository and care about at which revision source code was introduced, know that the revert command should always come AFTER analysis if you care at all about analysis and revisions matching up (which I do). This makes it sound intuitive...but is it? 
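
To make that ordering concrete, here's a rough sketch using the same JGit calls as above. It assumes revisions is ordered newest to oldest and that analyze is a stand-in (hypothetical) for whatever per-revision analysis you're running:

// Analyze first, THEN revert: undoing rev's changes steps the repository
// back toward the previous revision, so the next analysis sees older code.
for (RevCommit rev : revisions) {
    analyze(rev);                      // hypothetical: inspect the code as it exists at this point
    git.revert().include(rev).call();  // undoes the changes that rev introduced
}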

Hopefully this helps someone else struggling with this particular feature of JGit. Until next time!

Sunday, October 4, 2015

DLF Retreat - Day 3 and Retrospective

Welp, the first annual DLF Retreat has officially come to an end.

For our last and final day, we spent a few hours discussing each of our individual research endeavors in order to provide feedback to one another. A few interesting things came from this, for me anyway. One, I feel like I have a better understanding of what others are doing, as well as of what I am doing and how I might conduct my research. Something I've been told, and am really starting to realize is true, is that everyone's path to the PhD is different. Two, I realized how important, if not vital, it is to get other perspectives on the research you do; there are some things that others know, or that others see, that you might not. And finally, I feel more confident in the work I'm doing and in the feasibility of doing big things with a small, simple idea.

After that, we headed back...but first made a much-deserved stop at a local donut shop. I got a maple bacon donut and a Turtle Mocha iced coffee...yes, I said turtle. It was pretty damn good, though it had nothing on the donut :). The donut shop also doubled as our place of reflection on the weekend. I think we all felt good about what we did and the ideas we came up with. Most of what we did we thought went well; however, some points for improvement we discussed included having a shorter retreat, including newer students in the mix (possibly having smaller, more frequent retreats for the whole lab), and possibly even incorporating our significant others (who are obviously also affected by our decision to pursue a PhD). We actually had some productive conversations on the way back about work/life balance with significant others; this is something I personally always struggle with. My boyfriend is a little older than me and is much more ready to have kids than I am. If I weren't getting my PhD, I probably would have started a family by now, but we decided together it would be best to wait (although he definitely makes it known that he wouldn't mind trying now). These types of decisions vary by relationship, but it's definitely an important aspect of the PhD to consider.

Now that I'm back home, part of me is hype about digging back into my research and going back into the lab with a new perspective, new ideas, and plans for moving forward...and part of me wants to sit and stare at a wall and do nothing for the rest of the evening. Maybe I can just put together a plan for next week so I feel productive :)

Either way, I think this weekend was a huge success. I can't wait to see the changes put into place. Now we just have to hope the rest of the lab doesn't kill us for switching shit up :P

B signing off. Until next time!


Saturday, October 3, 2015

DLF Retreat - Day 2

Day 2 was the first (and last) full day of the retreat, and I'm thinking it won't be as difficult to convince people we actually did things after all :). 

The day started with pancakes...the best way to start any day. After breakfast, we got straight into it with some post-breakfast exercises. The first exercise was to think about the strengths and weaknesses of our lab. Now, this sounds easy enough...but the challenge is separating your own strengths and weaknesses from those of the group. For example, one of my strengths is that I'm social, so I make new connections easily. However, the same might not be true for everyone in the lab, so that wouldn't necessarily count as a lab strength. This exercise was especially useful because, although we were focusing on improving the lab, improving the lab means improving each of us individually. And since it's a small group (4 of us), it was easier to have ad-hoc discussions about each point. It was also nice that those involved in this exercise were senior students in our lab, meaning they've been around for a bit and know the ins and outs of how the lab is currently run. 

The next exercise, which followed almost directly from this one, was looking at opportunities for our lab that we may be missing (based on location, resources, our skill sets, etc.) and threats that could prevent our lab from being world-renowned (and, of course, from graduating students in a timely manner). This exercise was an interesting one; we found that, the first go-around, it wasn't completely clear to all of us what exactly was expected. One of the weaknesses of our lab that a couple of us mentioned was a reluctance to speak up when you don't agree with or understand something. We sat there for 5+ minutes as most of us wrote little to nothing. It wasn't until our advisor said "okay, let's take a step back, there's not much writing going on here" that we actually said "yeah, I didn't get the question". Once that was out in the open, and clarification was made, the exercise went much more smoothly and we got some good discussions out of it (and ideas for improvement). General advice for PhD students: speak up! Especially when you don't agree or understand. Time is precious; no need to waste it being timid (or confused).

The last exercise for the day also followed directly from the others; we were told to think about the weaknesses we discussed earlier and brainstorm ways to improve. Now, of course, when you get a group of researchers together to identify problems, they're going to come up with a laundry list, so we ended up with quite a few weaknesses (or shall I say "areas for improvement"). Therefore, we each picked one weakness we thought would be important to deal with and brainstormed on each. Of course, being the brilliant minds we are, we came up with some interesting ways to move forward in the lab and build both our lab brand and our individual brands. The test now is: will we carry all the things we discussed and decided on this weekend back and propagate them throughout the lab/group? Only time will tell! And of course I will try to share some of the big changes we make as we make them.

This is B from the DLF, signing off. It's time to watch some COPS before hitting the sack.
In the mean time, enjoy some of the dope photos we took during our trip :)





Friday, October 2, 2015

The First Annual DLF Retreat - Day 1

I am spending the weekend at Topsail Beach, NC with my advisor and 2 other senior students in our lab. You may be asking yourself 1) why are you at the beach with your colleagues and 2) did you know there's a hurricane coming that way? Our fearless leader decided it would be a good idea for us all to spend a weekend together AWAY from the lab to brainstorm ideas for our research and future publications. I must say, we are off to a pretty great start. I will also add that the trip was planned wayyyy before anyone knew Joaquin was headed this way. :)

Today was the first day of the retreat; the weather was wet and gross, as it has been for the past 2-3 weeks, but we made the best of it. Our first stop on the way in was Duplin Winery (gotta get the trip started right!). Duplin is a local wine that I had heard of (and of course drunk) but had no clue originated in NC. We got to taste various red and white wines for free -- and of course we had to buy some as well! We even got complimentary homemade crackers, which were actually pretty good. Before leaving the winery, in true researcher spirit, we had a brainstorming session. I shall not divulge the awesome ideas we came up with, but know that the DLF is making moves ;).



Once we made it to our place of residence for the weekend, we ate some lunch (which our advisor so thoughtfully packed) and then moved on to our first official activity of the weekend. For this activity, we had to think about our career path (where we see ourselves in 5-10 years) and answer various questions about it, such as its advantages and disadvantages, measures of success, and how it might evolve. Even if we aren't sure what we want to do for a career, the exercise helped us think about what it is we truly want to do and why. More concretely, it helped us think about how we can better prepare ourselves (and others in our lab) now for the future, whatever it holds. Retreat or not, I recommend this activity, or anything similar, to researchers/research groups. Even if the discussion is informal, it can be informative and thought-provoking and lead to more detailed discussions.

Now, after all this heavy stuff, of course we had to toss in some leisure; the rain gave us a break, so we took full advantage. Despite the lack of sun, it was still nice out on the beach. We even got to see some surfers try to take advantage of some of the waves coming in! After the beach, we had a lovely gourmet dinner prepared by Dr. E -- corn dogs, tater tots, and salad :P. So delicious!




We probably should have stopped there, though, since once we continued exploring we found a post-apocalyptic arcade, complete with broken ceiling pieces on the floor, a closed "surf shop" where you are supposedly able to rent fun water gear, and an indoor pool and hot tub that were not only dark but surrounded by a moat of water. Condo = nice. Beach = wonderful. Everything else = meh.
After exploring the beach and the "resort" we're staying at, rather than trying to play arcade games around the broken and water damaged ceiling, we had some quality bonding time while watching COPS on Spike TV. Yes. COPS. Don't you love it? :)

Day 1 has come to an end...I'm pretty excited to see what day 2 brings. I'll try to post tomorrow on our festivities. Until next time! :D

Thursday, October 1, 2015

Tricorder: Building A Program Analysis Ecosystem

In this paper, the authors provide an overview of "Tricorder", the program analysis platform currently used at [Google](https://www.google.com/). They also present the set of guiding principles that went into creating the platform.

Article link

Abstract

"Static analysis tools help developers find bugs, improve code readability, and ensure consistent style across a project. However, these tools can be difficult to smoothly integrate with each other and into the developer workflow, particularly when scaling to large codebases. We present TRICORDER, a program analysis platform aimed at building a data-driven ecosystem around program analysis. We present a set of guiding principles for our program analysis tools and a scalable architecture for an analysis platform implementing these principles. We include an empirical, in-situ evaluation of the tool as it is used by developers across Google that shows the usefulness and impact of the platform."

Thoughts

The authors present their design decisions and lessons learned with the program analysis platform Tricorder.

TRICORDER ARCHITECTURE

The Tricorder platform accepts as input the code to be analyzed and outputs inline comments on that code. The platform is realized as a microservice architecture; the authors argue that thinking in terms of services encourages scalability, modularity, and resilience (in case of node failure). The analysis workers are designed to be replicated and stateless in order to make the system robust and scalable. The results appear in the code review as comments (Google calls these robot comments, or robocomments for short).
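
To picture the setup (this is purely my own illustration, not Tricorder's actual API), an analyzer in this style is basically a stateless function from changed files to findings that the platform turns into inline review comments:

import java.util.List;

// Illustrative sketch only -- not Tricorder's real interfaces.
final class SourceFile {
    final String path;
    final String contents;
    SourceFile(String path, String contents) { this.path = path; this.contents = contents; }
}

final class Finding {
    final String path;
    final int line;
    final String message;  // what would surface as the inline "robot comment"
    Finding(String path, int line, String message) {
        this.path = path; this.line = line; this.message = message;
    }
}

// Stateless by design, so the platform can replicate workers freely.
interface Analyzer {
    List<Finding> analyze(List<SourceFile> changedFiles);
}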

GOOGLE PHILOSOPHY ON PROGRAM ANALYSIS

  1. No false positives
    "No" may be an over statement as admitted by the authors. The aim is to reduce the number of false positives to the minimum. Authors also define a effective false-positives as a measure of number of reports that a user chose not to take any action.
  2. Empower users to contribute 
    The insight is to leverage the knowledge of the crowd to build a robust system. Google actively encourages users of the Tricorder platform to write new analyzers. Since the users (Google employees) work with a variety of programming languages and custom APIs, the authors seek to leverage that vast domain knowledge to write analyzers. The platform enforces quality by reserving the right to remove an analyzer from the system based on its performance (i.e., how many users are ignoring its warnings, whether the warnings are annoying, whether the analyzer consumes too many resources...). The authors argue that analyzer writers typically take pride in their work, so the analyzers are generally of high quality, and issues reported against analyzers are typically resolved fairly quickly.
  3. Make data-driven usability improvements 
    The idea is to avoid arbitrary design and implementation decisions and instead ground them in empirical evidence. Enhancements and corrections to analyzers are made based on user feedback.
  4. Workflow integration is key
    The key idea is that the analysis should integrate with the user's workflow rather than requiring the user to go out of their way to run it. For instance, a standalone analyzer is less likely to be invoked by a developer than an analysis mechanism that integrates with their IDE or code review.
  5. Project customization, not user customization 
    Past experience at Google showed that allowing user-specific customization caused discrepancies within and across teams and resulted in declining tool usage. The authors observed that when a developer who uses a tool abandons a project, team members often check in code containing warnings that tool would have flagged. The Tricorder platform does, however, offer limited project-based customization: for instance, a team may choose to run optional analyses by default, and analyses that are not applicable can be disabled for that team.
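
As a toy illustration of the "effective false positive" idea from principle #1 (a sketch under my own assumptions -- the method name, inputs, and exact formula are mine, not the paper's instrumentation), one way to compute such a rate from per-analyzer click data might look like this:

```java
// Hypothetical illustration of the "effective false positive" measure described
// in principle #1: any report the reviewer took no action on counts against the
// analyzer. The inputs and example numbers are made up for illustration.
public class EffectiveFalsePositives {

    static double effectiveFalsePositiveRate(long pleaseFixClicks,
                                             long notUsefulClicks,
                                             long ignoredReports) {
        long total = pleaseFixClicks + notUsefulClicks + ignoredReports;
        if (total == 0) {
            return 0.0;  // no reports, nothing to penalize
        }
        // Reports with no positive action (NOT USEFUL clicks plus silently
        // ignored reports) divided by all reports the analyzer produced.
        return (double) (notUsefulClicks + ignoredReports) / total;
    }

    public static void main(String[] args) {
        // e.g., 70 PLEASE FIX clicks, 10 NOT USEFUL clicks, 20 ignored reports
        // gives an effective false-positive rate of 30 / 100 = 0.30.
        System.out.println(effectiveFalsePositiveRate(70, 10, 20));
    }
}
```

The point of the measure is that "false positive" here is defined by the user's reaction, not by whether the report is technically correct.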

Evaluation

One thing to point out about this paper is that the focus is less on the technical aspects of the platform and more on the principles that went into developing Tricorder and how well the tool implements each. This was nice; the platform took years to develop and I'm sure there are many technical pieces, so it would have been too much to go into full technical detail -- kudos on finding a good balance. At a high level, the guiding principles presented in the Tricorder paper make sense; some have been documented in existing literature, while others have been experimented with within Google. For example, false positives and lack of workflow integration have been discussed in the literature as major deterrents for potential static analysis tool users. The authors give sufficient background and include references to existing work, though some might have expected Google to take all the credit for the ideas presented :).

Although there is an evaluation of the platform, it leaves something to be desired.
For example, the authors kept track of how often developers using the tool clicked NOT USEFUL or PLEASE FIX to gauge its usability. However, the numbers for these clicks per day are surprisingly low for a company the size of Google. If this tool is available to Google employees, do the low numbers mean people are not using the tool, or that they are just not clicking the buttons?
The work would benefit from a more in-depth evaluation, perhaps including some qualitative findings regarding how people use the platform and how often in comparison to other tools that are available or have been used in the past.

Closing Points

The publication that came from this work is overall an informative one that makes contributions to both the research and technical communities. Besides the fact that the paper is well written (which, I should note, is unfortunately becoming harder to come by these days), there are a few other reasons why we believe this paper got accepted:

  1. The design guidelines for Tricorder are strong; again, not all are supported by existing literature, but there is support for each design decision made.
  2. Design decisions are documented at all (guidelines that others can replicate are always good).
  3. The project spans years of work with lots of data.
  4. It's Google!
  5. It's a relevant topic that people care about (and developers can relate to) in research and industry.

Overall, well done Google! :)

_______________________________________________________
Guest writer: Dr. Rahul Pandita, a postdoctoral researcher at NC State University in the Realsearch group advised by Dr. Laurie Williams.

Tuesday, September 8, 2015

So much to share, so little time!

Let me start by saying 'whew'...the past 2 weeks have been long (not that that's an excuse for months of missing blog entries). Time for some (brief) updates on my journey up to now...

First, I'm finally out of my 2-year paper drought! Well, sort of. No first-author full papers, BUT I did get to collaborate on 3 accepted FSE (Foundations of Software Engineering) papers. Two were for the NIER (New Ideas and Emerging Results) track: a paper related to my dissertation on tools tailored to the developer, and a paper on improving qualitative studies using social sites such as Hacker News.
The third, on which I was second author, was a full paper on developer information needs when using security tools.

The most exciting part is that I've been working on getting an FSE paper in for years now...and finally it happens. AND the conference was in Italy, which made it even better! :) Yes, I got to visit Bergamo, Italy for one week and it was great. The food was great, the people were great...everything was just, great. I may post some pictures...just been so laazzzzyyy since I made it back :D.

As usual, I met my three new people (and then some); the cool thing about attending a conference after years in the PhD is that you start to run into people you recognize -- and they recognize you! After working at MSR for the summer, I ran into a number of people I had worked with or met there. I will forever be grateful for that experience.

Speaking of MSR, sadly my internship ended 2 weeks ago. My last update spoke of it being 4 weeks in...before I knew it, I was in week 12 trying to finalize data collection and analysis prior to my departure. If you're curious, we did go for an ICSE submission (fingers crossed)! I'll be able to give more details once I find out if we made it in. Don't hold your breath though...won't find out for a few months now :/. So now we wait...but I would be remiss if I didn't note, again, how great my time was at MSR. Working with Tom Zimmermann and Chris Bird and getting to know the researchers and research at Microsoft was truly a unique and encouraging experience.

Ironically, my time at MSR further validated my desire to work in academia. Though I really enjoyed doing research at MSR, my time there helped me realize that in academia, I can have the best of both worlds! While there, I met a number of professors visiting to do research or just to see what kind of research is being done in industry. I could definitely see myself teaching during the year and then doing something different during the summer with the cool kids at MSR :)

I know I said I would be brief, but I feel like I have so much to share. Those are the major points though - things have been a whirlwind, on top of the fact that I missed my boyfriend's birthday while in Italy, so I'll need to make that up to him at some point. But I'm living life and trying to find balance...now I'm working toward my Oral (finally) so I can try to really get my life started.

Hopefully I can get better at these blog posts so I don't have to brain dump every time - but that's all for now. Until next time... :)