Weekly Updates

Scroll down to see our team’s progress throughout the semester so far as we work through the problem space and witness the challenge evolving. The most recent posts are first.

July – Summer Update 2

Throughout the month of July, the team continued the interviewing process with experts on each Joint Strategic Plan goal. These experts provided valuable direction as to which indicators would best refine our product. Between these interviews and direction from the Center for Analytics, we were able to construct a final version of our design, which is outlined in the Canva page hyperlinked here.

June – Summer Update 1

Our team was approached by previous instructors, X-labs, and our problem sponsor with an opportunity to continue our work into the summer. Although most of our team was made up of seniors graduating in Spring 2020, we still had two members who would be continuing their education at JMU in the fall. Grace and Jack accepted the challenge to continue the work and were joined by Madison Hince, a Computer Science/Intelligence Analysis double major. Madison will be helping the team refine their deliverable and adapt it to the evolving needs of our client.

Week 9   4/12-4/18

During week 9, the team continued to develop the second iteration of our MVP after receiving feedback from our peers and instructors. We have decided to expand the list of countries and include two other regions in the MVP, so we are now analyzing countries in Southeast Europe, the Middle East, and North Africa. We are analyzing our first prototype to see if any indicators cause outliers, as well as continuing to refine our process for the final MVP product. All of the data used in this matrix is currently open-source, so if our MVP is implemented, the Department of State may opt to use different, classified data sources. That said, our main goal is to present a proof of concept to the department. We are developing a model for Goal 2 of the JSP, which, combined with Goal 1's model, would account for 50% of our final algorithm.
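
To illustrate what "Goal 1 and Goal 2 together account for 50%" could look like, here is a minimal sketch of a weighted composite score. The weights, goal names, and numbers are illustrative assumptions only, not the team's actual model.

```python
# Hypothetical sketch: combining per-goal scores into one composite score.
# Assumes four JSP goals weighted equally, so Goals 1 and 2 together
# contribute 50%. These weights and values are illustrative, not final.
GOAL_WEIGHTS = {"goal_1": 0.25, "goal_2": 0.25, "goal_3": 0.25, "goal_4": 0.25}

def composite_score(goal_scores: dict) -> float:
    """Weighted sum of per-goal scores (each assumed to be on a 0-100 scale)."""
    return sum(GOAL_WEIGHTS[g] * goal_scores[g] for g in GOAL_WEIGHTS)

# Made-up scores for a single country:
country = {"goal_1": 80.0, "goal_2": 60.0, "goal_3": 70.0, "goal_4": 50.0}
print(composite_score(country))  # 65.0
```

Equal weights are just one choice; the same structure lets the department reweight goals without changing the rest of the scoring pipeline.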

Week 8   4/5-4/11

During week 8, our team primarily focused on developing our second iteration of our MVP, in which we implemented a new data collection and standardization method. The primary differences between our previous MVP and our newest iteration are firstly, that we have a richer selection of datasets underlying our analysis, and, secondly, that the distribution of the data is no longer skewed by our scoring method.
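
One common way to keep a single indicator's raw units from skewing a combined score is min-max rescaling to a shared 0–1 range. This is a generic sketch of that idea, not the team's actual standardization method, and the indicator values are made up.

```python
# Illustrative sketch: rescale raw indicator values onto a common 0-1 scale
# so that no one indicator dominates the combined score because of its units.
def min_max_scale(values):
    """Map each value to (v - min) / (max - min); constant lists map to 0.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

gdp_growth = [1.2, 3.5, -0.4, 2.8]  # hypothetical per-country indicator
print(min_max_scale(gdp_growth))    # largest value maps to 1.0, smallest to 0.0
```

Z-score standardization would be an equally reasonable choice; the point is that every indicator ends up on the same scale before scoring.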

Our weekly “Ah-ha” moment was realizing Tableau will serve our visualization needs much better than Google Sheets.

Week 7   3/29-4/4

Finishing out week 7, the team conducted a few more interviews and further narrowed down the final goals for our MVP. After conversing with employees from the Office of Management Strategy and Solutions, we learned that the JSP is hard to implement when applying it to smaller regional/functional branch locations. This department advised that they are looking for an easier and more effective way to transfer staffing resources and that they are currently using a tool called Global Preserves Mitigator, which attempts to collect all departmental datasets and combine them to quickly visualize data. Looking forward to the next few weeks, we hope to make use of this tool and potentially borrow some of its functionality.

Our “Ah-ha” moment for this week came when our interviewees reinforced our previous thought process by advising us to match the strategic goals with resources and indicators.

Week 6   3/22-3/28

During week 6, the team was finally able to regain some traction and uniformity as we continued with our interviews. We reached out to a former Secret Service and Department of State employee who offered a wealth of valuable insight. Often, when the military leaves a certain area or region, many of the resources from other departments follow suit. This suggests a new indicator: military involvement in these regions, and how changes in military placement will shift resource allocation. We also learned that when countries begin to implement systems other than democracy, the US traditionally reacts swiftly in the opposite direction. Because of this, we will also need an indicator to monitor the direction of foreign political systems.

Our biggest weekly takeaway was that we have the ability to request USAID reports for whichever regions or countries we choose to test our tool with.


Week 5   3/15-3/21

Although we did experience some difficulty in reaching contacts in the State Department, we were able to make contact with individuals from Tuvli, an innovative technology company that supports Homeland Security, the Department of Defense, and federal civilian agencies. During this interview, the team was provided information about how Tuvli uses a “Five Factor Index” to support their reasoning for allocating resources to select countries. They also mentioned that Tuvli’s views and data are not the final say, and that resource allocation is like a recipe in which their data is only a single ingredient.

Week 4   3/8-3/14

During week 4, the team did not meet as regularly due to the restructuring of the class in response to the Coronavirus. We continued to conduct interviews when possible and were able to formulate a tentative MVP as well as make a few new realizations. One of the most important was learning that, as of now, the Department of State primarily looks at the previous 10 years of resource allocation when considering where to send the next year’s resources. This pointed us in a new direction for identifying indicators for our MVP, and helped us create the first working model of an interactive mapping system in which the user can view information and rankings of countries at a glance.

Week 3   3/1-3/7

In week 3, we continued with our interviewing process. We reached out to a senior advisor for management and innovation who previously served as a foreign service officer for 10 years. They advised us that the U.S. embassy presence across the world is very uniform, and that there may not be a need for it to be so standardized. This got the team thinking that the algorithm we are attempting to design could be used in more fields than just resource allocation. Our interviewee also pointed out that it may be difficult to apply broad indicators across the board at a global level, and that we may need to evaluate ways to use a more focused approach. We have identified our preliminary indicators to be population growth rate, economic growth rate, and other factors as laid out in the JSP.

An “Ah-ha” moment from this week was when the team realized that one of the primary indicators we should keep in mind is how well foreign countries that are seeking aid meet the objectives laid out in the JSP.

Week 2   2/23-2/29

During this second week of working as a team, we reached out to our primary problem sponsors. During this phone call, the team learned that there were many more factors in the problem space than we had previously considered, and that we would definitely have our work cut out for us. Our sponsors advised that the primary issue at hand is that there is not a systematic way to allocate resources to accomplish the goals laid out in the Joint Strategic Plan. The current system in place needs to be improved to not only increase confidence in the models, but also to offer a more global perspective. 

Week 1   2/16-2/22

During this first week of working together as a team, we have been introduced to our problem sponsor and have learned more about the challenge presented to us. We are learning the common practices of using Value Proposition and are beginning to conduct our first interviews. We have been experiencing some difficulty in getting into contact with some of the interviewees but are hopeful that we can get around this issue.