Reflecting on Summer 2016

It's been a few weeks since our loss and I still feel devastated. It hurts to watch or even think about the game, so I decided to leave LA and reflect on the last split. I'll try to elaborate on aspects that the public may be less familiar with. While I've been with the TSM organization since 2015, this was my first split as a coach. The position carries a great deal of responsibility, primarily because I have complete control over the team's strategic direction. Since my predecessors left me with no discernible structure, the hardest part was building it from scratch.

First, as a team we decided on a few game axioms, upon which we built a simple system for evaluating different phases of the game. This gave the team a singular direction in the way we played and a good structure for reviewing and building our collective knowledge base. Next, as the person solely in charge of pick/ban, I had to develop a set of rules, which took a lot of time. It started with simply building cohesive compositions, and every week added something new: adapting to patch changes, drafting to test matchups and strategic game plans, working around scrim opponents' specific tendencies, and hiding parts of our strategy or picks from every team, to name the most relevant. Finally, there was the hierarchy of elements to work on: mechanics, game fundamentals, communication, higher-level plays/strategies, adaptations, etc. This three-part system had to be constantly examined and optimized. For example, halfway through the split, I recognized that we had to establish a new "reset phase" to organize a subset of situations we were glossing over in reviews. After failures in the early game, we started actively devoting time to adding vision setups and plays from behind to the playbook, something we were inexperienced at in the beginning of the split. All of this had to follow a certain hierarchy depending on the weekend's opponent, patch changes, scrim partners, the relevance of the issue, and more.

I also had the pleasure of working with three extremely hard-working and talented people: Anand Agarwal, Brian Pressoir, and Matthew Schmeider. In addition to doing analysis and small projects for the team, one of their primary responsibilities was position coaching. After every scrim and team review, each player would meet with their position coach for the week to go over specific details of the game and personalized learning goals. We also brought each position coach onsite for a week to build relationships with the players and familiarize them with the details of working on staff; during that week they helped me set the strategic direction, worked through prep and review, and pioneered objectives for the team to learn from. I want to thank all of them for their hard work and dedication throughout the split; their help was instrumental to our progress.

A large part of the success of Korean teams can be attributed to the systems their organizations have built and refined over a long time. During the bootcamp and Worlds, I found myself questioning and changing core axioms of the systems we'd established earlier, things I could only learn through experience. I will never stop wondering whether, had we made some small changes in our approach or certain areas of focus, those small margins would have propelled us into the quarters or beyond. Despite a lot of factors that didn't go our way, I still felt each loss weigh heavily on my shoulders. These lessons are much harder to learn in NA, mostly due to a lack of experienced coaching philosophies. In the same way that players need skilled opponents to practice against in order to improve, coaches, especially inexperienced ones, need different perspectives to learn from. This split, only Tony and Reapered were systematic in the way they approached (strategic) coaching, evident from their pick/bans and the strategies they practiced in scrims, but even then I could see the deficiencies in their teams' styles, as I'm sure they could see in mine. The best Korean teams are disciplined in the fundamentals of the game and adapt with ruthless efficiency; western organizations need to build and refine their systems to compete with them by allowing the coaching staff to gain experience and by supporting their professional development.

As disappointed as I am by our Worlds performance, I am equally proud of our accomplishments during the summer. We shared the same goal with a passion that resonated in every practice and stage performance, and the pursuit of excellence felt more important than the result each weekend. This ethos will be ingrained as the benchmark upon which TSM continues to operate. There is nothing more inspiring than working with a group of people willing to dedicate every minute of the day to the game. I'm glad our fans got to take this journey with us (thanks Max), and I hope they were proud, connecting with the story of this team and how much it has matured over the course of the year, striving to do something extraordinary.

I love my team.

Analysis in Competitive League of Legends

Despite the tremendous strides competitive League of Legends has made recently, the industry is still fairly young with regard to how players interact with support staff and management. While the responsibilities of the coaching role have reached a state of general definition, spanning the development of a team's social and performance elements, the analyst role remains relatively obscure in scope and impact. An idea popularized in the past few years is that analysts mostly watch a lot of competitive games and advise on pick/bans or strategy. While this is an important aspect, it presents a fairly reductionist view of the role.

A more comprehensive definition of an analyst is a person who maintains a high-level understanding of the game by collecting as much relevant data as possible, constantly reviewing that data with experts in the community to interpret the details, and reporting on a meaningful and actionable subset of information. The most challenging aspects of the role are remaining objective in the face of a constant barrage of subjective commentary, cultivating and maintaining a network of relationships that can continually critique your analysis, and, most importantly, efficiently communicating all relevant information to the rest of the support staff and the team.

The figure above shows a sample subset of analyst directives, where the width represents relative importance to the team and the height represents the time invested or the analyst's expertise in the topic. The figure will look different for teams depending on their priorities and the amount of support staff they have. For example, a team with a large infrastructure can generally pick up multiple members who excel in niche areas or devote time to specific sections, while it may make more sense for smaller teams to pick up a full-time analyst who can cover all the general aspects, depending on their long-term goals.

 

 

Scouting


Different Regions

In line with the community perception of analysts, support staff often track games from different regions, notably both LCS regions along with the Korean and Chinese leagues. A year or two ago, many teams didn't have the ability to invest in a proper coach or analyst, which led to the dominance of empirical analysis, with the Korean competitive scene serving as the model to draw inspiration from. However, the top teams in other regions weigh the opportunity costs of their in-game decisions to maximize the efficiency of their picks and strategies, reasoning that can't be determined by observation alone. Thus, if those assumptions are mistranslated, the utility that teams gain from copying the innovation decreases considerably.

In addition to watching the other regions, analysts are responsible for translating the efficacy of particular strategies and picks into the context of their local region or team. Analysts help build an identity around the unique proclivities of their players and establish the team's own hierarchy of pre-game and in-game opportunity costs before using empirical data to bolster their strategy. Essentially, analysts are responsible for watching other regions, but the underlying purpose is to help the team and players make informed decisions about their own strategies instead of simply mimicking other teams. Recurring benefits of scouting different regions include new strategies, playstyles, and meta shifts that have a seed in external regions. Short-term benefits include having a more holistic perspective of opponents at international competitions like IEM Katowice or the World Championship.

 


Upcoming Opponents
Scouting specific opponents follows a more refined and deliberate approach. At the beginning of each week, a scouting report breaks down each opponent the team is preparing for, covering everything from players' champion pools and playstyles to general level-1 setups and lane swaps. Depending on the team, it may also cover damage output and gold allocation throughout the game, as well as pick/ban patterns, to show what the team prioritizes before and during the game. After a discussion with the rest of the coaching staff, it's decided how this impacts the week's scrim practice. For example, if the opponent is proficient at prioritizing a certain role or champion the team is not comfortable playing against, there's a discussion about whether it's worth altering the playstyle to prepare counters or simply relying on specific champion bans. Finally, the day before each match there's another discussion finalizing pick/ban contingencies and specific in-game plans, where the coach gets the team's final feedback and makes any changes.

 

Tools of the Trade

From post-game data to Riot's API, there's an abundance of readily available information for teams to consider. After extracting the data into a reasonable format, tools like Tableau and Matlab allow analysts to explore it dynamically or perform in-depth analysis. For example, if a team wants to focus on the early game, the data helps find other teams that generate early-game leads and answer questions like: is the gold concentrated in specific lanes, or are the leads a result of overall pressure? How do these champions perform if they don't have early-game leads? What are the opportunity costs of early dragons versus towers? The answers to these questions help shape the direction of the conversation and offer constructive precedent players can reference.
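To make that workflow concrete, here is a minimal sketch of the first question above, assuming the match data has already been exported into a flat CSV with hypothetical column names (team, game_id, role, gold_diff_at_15); neither the file nor the columns come from Riot's API itself, they are placeholders for whatever export an analyst actually builds:

```python
# Hypothetical exploration of early-game gold leads. Assumes a CSV with one row
# per game and role: team, game_id, role, gold_diff_at_15 (gold lead over the
# lane opponent at 15:00).
import pandas as pd

df = pd.read_csv("early_game_gold.csv")

# Average gold lead per role for each team, one column per role.
per_role = df.groupby(["team", "role"])["gold_diff_at_15"].mean().unstack("role")

# Total early lead per team, used to rank who generates the biggest leads.
per_role["total"] = per_role.sum(axis=1)

# Teams at the top with one dominant column get their leads from a single lane;
# teams with evenly spread columns generate leads through overall map pressure.
print(per_role.sort_values("total", ascending=False).round(0))
```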

Generally, the data mentioned above isn't seen by coaches or players; analysts act as an intermediary. But there is data that needs to be presented to and manipulated by coaches and players, and Excel and PowerPoint are traditional, powerful tools for the task. Online spreadsheets shared within the team to track scrim information let players see where they did well and where they did poorly, and help them present arguments to the team regarding picks and strategic direction. PowerPoint or other presentation software helps summarize information for vision control, scouting reports, pick/bans, and more. Finally, video editing software like Adobe Premiere allows analysts to put together presentations for reviews or for introducing new concepts.

 

Introspective Analysis

BoxeR, the renowned StarCraft player and current coach of SK Telecom T1's StarCraft II team, once described his strategy as one in which "even if the opponent had predicted it, he cannot stop me." Day[9] similarly advises his audience that "strategy and solid play doesn't revolve around tricks, surprises, or hidden information, but very solid planning and crisp execution."

An implicit and understated responsibility of analysts, and support staff in general, is to help the team's development, rather than merely searching for new tricks or scouting other regions' strategies. More often than not, analytical resources are focused on external data, while team development follows a reactionary path rather than a proactive one. In a methodology that reflects an infantile approach to delayed gratification, teams focus heavily on the short term in order to win games, rather than establishing a foundation for future success. New teams consistently fall prey to this trap, and while they may enjoy sporadic successes, they lack a cohesive identity and rarely achieve greatness.

In order to help direct team growth, support staff have to recognize the type of approach that will best represent their players. Some teams are innovative and enjoy playing combinations of new or off-meta picks, while others are reflexive, holding diverse pre-existing strategies and adapting to various picks and game situations. Teams can be adaptive, researching heavily and being the first to pick up and master new meta styles, while other teams prefer a conservative approach and shift slowly, letting their skill and teamwork carry them through the transition. Committing to an identity helps a team create long-term plans for approaching patches, tough opponents, tournaments, roster changes, and more.

Similarly, players' development can take different routes. Mechanically adept players learn certain champions and playstyles at a different rate than tactical players. Other players are neither, but are more open to the learning process, allowing the support staff to mold the player the team needs. Analysts have to recognize the type of players they are working with in order to recommend plans for adapting during patches or preparing for upcoming tournaments and games. Investing the time to learn about the players and the team before setting long-term and short-term goals is important for any team that wants to set itself up for long-term success.

 

 

Final Thoughts

After the coaching role is established, the analyst role becomes the next most important on the support staff. There is a considerable amount that analysts can be responsible for, depending on the goals of the team. Some teams simply want to place well in their region, while others want to win Worlds or create a lasting legacy. As the expectations of the team increase, it becomes critical for the organization to invest in good analysts who are not only knowledgeable about the game and the various scenes, but can also communicate and present their ideas in a clear and persuasive manner. For a team expecting to maintain a top place in the regional standings and do well at international competitions, a reasonable expectation is 60-80 hours of analysis work per week, from planning and creating content to scouting regions, analyzing scrims, and more. We are slowly approaching an era in team e-sports where player and team development will become more important than raw talent, and surrounding the coach with a strong supporting infrastructure will ensure a team's long-term success.

Not All Leaks are Created Equal


Introduction

On December 29, 2014, the moderators of the League of Legends subreddit created a thread to discuss the topic of leaks and rumors. In response, DailyDot writer Richard Lewis posted a video on his YouTube channel defending leaks on three premises: freedom of speech, the public's right to information, and keeping the narrative objective. This piece will critique each argument in order to show that leaks can be good or bad, depending on the context in which they are presented.

The majority of this piece will revolve around quotes from each of these sources: Richard Lewis’s video discussing leaks, the preamble from the Society for Professional Journalists (SPJ) and the work of Kirk O. Hanson, the executive director of the Center for Applied Ethics at Santa Clara University. The tags below contain links to the relevant sources.

 

What is a leak?

A leak is defined as the intentional disclosure of secret information. 

This information can be personal. Consider a man who just proposed to his girlfriend and hasn't broken the news to his friends or family. A colleague who overhears the conversation and shares it publicly without the couple's consent has, whatever the intention, leaked the information. A couple who wanted to broach the sensitive topic with their parents, or have a quiet wedding with close friends, now has the awkward task of handling a narrative they didn't initiate.

Similarly, all organizations hold a wide array of information that's kept secret because of its financial value, marketing potential, research and development specifications, and more. Corporations seek to protect information because of the economic value of keeping it from becoming widely known. This concept is relevant in professional e-sports, from team strategies to recruitment methods. For example, a player or organization entertaining multiple offers can leverage them to dictate terms, but a leak about their internal discussions erodes that negotiating power.

 

Value and ownership of information

Similarly, marketing players and sponsors is a relevant part of the e-sports ecosystem. In the video, Richard Lewis goes into further detail:

In alignment with the discussion surrounding the value of information, Lewis lists several sources of marketing revenue for organizations, and also points out that the original source of a piece of news garners the most web traffic for it. Therefore, by leaking the information, Lewis diverts most of the web traffic and attention away from the organization and toward his publishers, like the DailyDot. Naturally, there's a long list of organizations frustrated with Lewis's reports, either because they undermine some strategic plan or because they diffuse the value of their information. In response, Lewis comments:

Business regards information as a commodity and the possession of it as an asset. Economists would like to treat and account for information in the same way as physical assets; however, no discipline has provided an accepted model for such treatment although analogies abound.  

As inventory, information goes through the value-added stages of raw material (events or processes to be measured), work-in-progress (information in development), and finished goods (marketable information). Information gathering and presentation require capital investment and human labor. Besides being costly to acquire, information incurs management costs. Like physical assets, information faces quality control inspection before it can be distributed. (source)

Along the same lines, e-sports organizations have information, such as roster changes and management decisions, that they want to protect from proliferation until they can redeem it for its value, for a variety of reasons. But Lewis claims that information within this context cannot be owned, and implies that while he sympathizes with the frustration of trust abused within an organization, he has the right to publish the information if he can obtain it and confirm its accuracy.

The problem with Lewis's defense is that there is a lot of ethical and legal precedent supporting the idea that information of value can be owned and protected. From personal privacy rights to corporate ethics surrounding proprietary information, the leaking of protected information is a serious issue. Even the SPJ declares as part of its core principles that:

This principle implies that just because you can obtain private information legally, you must still have a relevant reason for sharing it with the public. Regarding the acquisition of information itself, Lewis quotes the SPJ to justify maintaining the anonymity of his sources, and goes on to posit that his sources' contractual obligation to conceal information does not concern him:

Lewis elevates his sources to some sort of undercover-hero status, with a tone implying that they innocently found themselves embroiled in a conspiracy or danger and are now willing to risk their careers and livelihoods to expose some deep-rooted corruption. But when we consider leaks in the e-sports community, only some of them actually expose malicious behavior or intent. Most of the sources "close to the subject or team" should not be revealing the information they do. They're breaching the trust of their friends and colleagues and should not be considered role models in the community.

Hanson elaborates on the ethics of confidential information and sources:

Hanson points out that leaking protected information obtained from a source who violated an obligation to keep it secret is morally dubious without a valid reason for doing so.

 

Public Interest vs. Public Right to Know

Lewis next weighs in on his reasons for revealing protected information, claiming that he weighs the discomfort to organizations against the benefit of the public and justifies it so:

While the argument may sound reasonable, there is a clear philosophical distinction between what the public is interested in knowing and what it has a right to know. People, even famous public figures and elected officials, have a right to privacy even when the details of their private lives are of 'public interest'. Similarly, organizations retain the right to share or withhold information pertinent to their competitive advantage.

Referring back to the SPJ, the code of ethics states that:

Leaks, by definition, imply that the information was not accessible through proper channels; they prey on the curiosity of a community excited to learn forbidden or secret information in order to generate revenue for the website, at the expense of the involved parties.

For example, during the off-season, Martin 'Rekkles' Larsson, considered by some to be the best western AD carry, moved from the veteran Fnatic team to the new European powerhouse, Alliance. Because Rekkles is a fan favorite, the spark of a rumor in mid-October regarding this possibility resulted in many speculation threads, with self-proclaimed Reddit detectives searching for evidence to support the claim.

Lewis, aware of the public interest surrounding the issue, published a series of articles in the span of a few days, from "Rekkles in talks with Alliance" to "Rekkles has had enough, buyout option is 15k", garnering front-page status multiple times and yielding tens of thousands of hits. There's a difference, however, between excited members of the community speculating about rumors and leaked articles posted by journalists claiming to have inside sources:

Lewis acknowledges that when he publishes a leak, the information transforms from speculation into a state of quasi-confirmation. When two organizations are re-negotiating sponsorship details and contracts, or transitioning players in and out of their teams, the process is time- and information-sensitive. Leaks force all involved parties to scramble and shift their plans to work around the loss of what they previously believed was privileged knowledge.
 
Speculation is an expected by-product of the community and should be taken somewhat lightly, but leaks that feed on public interest at the cost of teams', players', and organizations' ability to conduct their business privately are a serious matter, especially when their only purpose is to convert that interest into clicks and revenue.

 

Controlling Spin

Lewis offers another reason to support leaks, claiming that organizations tend to spin information to suit their own narratives.

There are two sides to the spin issue, the first of which is marketing. For the sake of self-promotion, you replace an individual, organization, or identity with a persona that offers a limited or incomplete picture of the real thing. Individuals spin themselves during interviews; advertising spins companies and products. E-sports personalities and players are no different. One team may explain a roster move as a skill-based change or a personality shift to support a narrative of competitive edge or dedication to their fans, when there may be a series of complex underpinnings behind the decision, such as salary, appearance, marketing, and more.

As Lewis rightly points out, the other side of the issue is perception. The audience needs additional information to assess whether the conclusions offered or perpetuated by the source are valid or accurate. Independent reporters have the opportunity to provide a different perspective on the same issues. They can challenge assumptions, talk to experts, and present their own conclusions. However, Lewis then goes on to explain the mechanics of independent reporting:

Lewis implies that independent reporters are not motivated by the need or desire to spin the information…and then, in the same thought, explains how readers who support them help build reputation and generate revenue…the same reasons most organizations or parties choose to control the narrative in the first place. To further contest this point, let's consider the primary thrust of this video, where Lewis leaks information regarding a secret Riot meeting about content creation:

In his diatribe, Lewis uses the knowledge of an internal Riot meeting a few weeks before the video to perpetuate the narrative of Riot as a greedy and manipulative organization trying to quash community content in order to have full marketing control over their product and infrastructure…while the evidence he offers to support his claims is tenuous at best.

But let's delve into some of the issues. While Riot has assumed broadcasting control over the Chinese and Korean scenes, it is far more likely that the reason for doing so is to offer better-produced, free content for viewers. Also, the content of the new Riot talk show barely overlaps with that of shows like Summoning Insight and First Blood; it even promotes some (albeit Riot-filtered) community content. From this perspective, these aren't particularly sinister moves, but rather fairly logical and straightforward decisions from a company standpoint, and while their methods may be inexperienced or vary in efficacy, Riot's dedication to its players and fans has never been in question.

This perfectly valid conjecture highlights some of the good resulting from Riot’s recent moves. Yes, a discussion perhaps needs to take place regarding the advantages of organic growth by promoting 3rd party content versus in-house investment in production value and marketing control. But clearly, there is more to the conversation than Lewis presents in his editorial.

It is common knowledge that Lewis and Riot have a poor working relationship, with the Deman-IEM incident being the most recent conflict. Furthermore, he disagrees with a majority of Riot's business policies regarding control of their product and the surrounding infrastructure. According to many journalists in the community, if Riot begins to publish and promote its own content, it diverts business away from independent sources, Lewis among them.

So, while Lewis offers a new perspective on Riot’s recent string of announcements, it is certainly not an objective one. He has both personal and professional reasons to perpetuate the negative narrative about Riot, spinning the information and evidence to support his claims, and disregarding the positive effects of Riot’s new initiatives.

In his assessment comparing good leaks to bad ones, Hanson points out that:

It is a misconception that journalists are unbiased or that they have no vested interest in how they present the news. Cable news networks like Fox News have constantly been accused of promoting conservative political positions and criticized for biased reporting. Web-based articles use click-bait headlines and skew their reporting toward sensationalism to grab readers' attention at the expense of honest, objective perspectives.

To summarize, as Lewis rightly points out, independent journalism should exist to offer new and different views. But it does not justify leaking information to prevent organizations from establishing their own narratives. Journalists are still subject to personal biases and financial motivations in their reporting and players/organizations have the right to market themselves in the manner that they choose.

 

Conclusion

Leaks can help provide alternative perspectives and reveal malfeasance, but not all leaks are created equal. Players, teams, and organizations have a right to privacy in conducting their business. If those rights are to be infringed upon, through furtive methods or by asking confidential sources to break their NDAs and the trust of their friends and colleagues, there should be a good reason for it. Leaks that are self-interested and financially motivated, and that use dubious methods to acquire information simply to reveal it before organizations can, should be seen as unethical. The community's approval of leaks will not wane; it is human nature to be curious and to want access to private information on topics of interest. The responsibility falls on journalists to assess what good and harm a leak can do, and to act accordingly.

Learning from SK Telecom

Following the mass Korean exodus, SK Telecom T1 remained one of the few teams to retain a full roster of top-level players, resulting in a crucial developmental advantage over the other teams that competed in the OGN pre-season. While teams like Samsung were still laying the foundation for their new (albeit extremely talented) rosters, SKT forged ahead by testing the limits of relevant strategies within the context of their team, ultimately finishing the pre-season well ahead of the other teams. With the LCS about to start, it's important that coaches and analysts look toward SKT as an example: not merely to see which champions are strong in the current metagame, but also to study their approach to the pick/ban phase, their resource allocation, and their vision strategy.


The Pick-Ban Mind Games


Before delving into the pick ban phase, let’s first consider the landscape of relevant champions in this meta-game. There were 54 different champions picked over 36 games, 7 of which were used in multiple roles. Some of the more contested picks for each role are shown below, along with their pick, ban, and win rates.

 

It's interesting to note that Gnar was banned in all 36 pre-season games. Corki and Lissandra, despite being present in 83% and 81% of games respectively, show win rates of only 35% and 33%. With the exception of Ahri and LeBlanc, the majority of mid champions are flex picks, a list that includes Jayce, Ezreal, Kassadin, Lissandra, Morgana, and others. This flexibility also extends to item builds and strategy, allowing champions to overcome weaknesses at various points in the game and further opening up the possibilities of the pick/ban phase. For example, Renekton normally struggles against Jayce; however, in their game against Samsung, SKT's Marin opted into the matchup. With some pressure from Bengi and an unorthodox full-damage build, Samsung's Cuvee ended the game with a score of 1-8-2, 100 CS behind Marin.
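As a rough illustration of how a table like this could be tallied, here is a sketch assuming a hand-kept log of the pre-season games in a CSV with hypothetical columns (game_id, champion, status, won); the file and columns are illustrative placeholders, not an actual data source:

```python
# Hypothetical tally of pick/ban/win rates from a hand-kept game log.
# Assumed CSV: one row per champion appearance with columns
# game_id, champion, status ("picked" or "banned"), won (1/0, meaningful only for picks).
import pandas as pd

games = pd.read_csv("preseason_log.csv")
total_games = games["game_id"].nunique()

picks = games[games["status"] == "picked"]
bans = games[games["status"] == "banned"]

summary = pd.DataFrame({
    "pick_rate": picks.groupby("champion").size() / total_games,
    "ban_rate": bans.groupby("champion").size() / total_games,
    "win_rate": picks.groupby("champion")["won"].mean(),
}).fillna(0)

# "Presence" (picked or banned) is the contested-pick measure referenced above.
summary["presence"] = summary["pick_rate"] + summary["ban_rate"]
print(summary.sort_values("presence", ascending=False).round(2))
```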

These changes make the pick/ban phase more interesting than ever before. Coaches, who are now allowed to participate in the process, not only have the responsibility of helping choose the champions but can also help outline strategies and objectives for that particular team composition. With that in mind, let's consider SKT and their coach Kkoma's brilliant pick/ban phase against Najin e-mFire.

 

Each pick/ban phase is a separate game, played by exchanging information and bound by player skill and time. Recognizing that Najin have the first pick in Game 1, SKT choose to trade power picks for flexibility. Najin opt for Janna as their first pick, followed by SKT picking up Lissandra and Jarvan IV. Najin pick up Corki to complete their power bot lane, after which SKT pick up Ezreal and Alistar. At this point, SKT do not know which lane Kassadin will be played in or what Najin's last two champions will be, but between Lissandra, Jarvan IV, and Ezreal, SKT can use their last pick for top, jungle, mid, or AD carry.


 
After Najin pick Zed and Elise in their final rotation, Kkoma picks Lee Sin for Bengi, locking Jarvan IV into the top lane. This forces Najin to lane swap out of the Kassadin-Jarvan IV matchup and allows SKT to avoid facing Corki and Janna in the bot lane. While this game exemplifies Kkoma's emphasis on pick flexibility, in the next game he manipulates power picks to focus on team synergy, putting Jarvan IV and Lissandra in different positions and using Jayce and Graves-Janna to disrupt mid and bot respectively.

Four of the five games Faker played in the pre-season were on flex champions like Ezreal and Lissandra. It's an approach that uses highly skilled players to adapt for the team, rather than having the team cater to the strengths of its star player. Professional players are an aggregate of many elements: mechanical skill, game sense, flexibility, etc. The pick/ban phase in Season 5 will test not only a team's knowledge of the current metagame, but also how they create an identity around the unique proclivities of their members.

    
Game Economy and Resource Allocation


A game of League of Legends has limited resources on the map for its players in terms of experience and gold. Moscow 5 recognized that Darien’s repeated deaths allowed them to pick up dragons and buffs from the other side of the map, while their enemies lost wave after wave chasing Darien. CLG was criticized for having a singular strategy of feeding Doublelift and letting him carry the team. These are but a few examples of how some teams use resources differently.

The question then becomes, how can teams use resources efficiently? In terms of gameplay, the jungle and support roles are examples of how a team concentrates resources onto the top, mid, and AD carry roles during the laning phase. But once the laning phase ends, each team has a different philosophy on which members receive farm, push out lanes, and control vision.

A top laner who is split pushing by himself will have more farm than one who is responsible for defending a turret with his team. A mid laner who has to retreat due to jungle pressure gives up a wave of creeps. While these seem like disparate events, teams influence resource distribution within the team based on their objective focus and overall strategy. The figure below shows the distribution of CS within each team during the preseason.

 

Teams like Samsung Galaxy and Najin prioritize farm for their AD carries at the expense of their top laners, while KT Rolster does the opposite for its top laner. Teams like CJ Entus have a tight distribution, while others focus on their mid laners, notably IM and SKT. This data gives valuable insight into a team's strategy and how they are likely to react. Let's now consider the new SKT, a team composed of members from both the K and S squads, specifically in the top lane.
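A simple way to produce this kind of breakdown, assuming end-of-game CS per player is available in a CSV with hypothetical columns (team, game_id, role, cs), is to compute each role's share of the team's total farm; the column names are assumptions for illustration:

```python
# Hypothetical CS-share breakdown per team. Assumed CSV: one row per player per
# game with columns team, game_id, role, cs (creep score at game end).
import pandas as pd

cs = pd.read_csv("preseason_cs.csv")

# Each player's share of their team's total CS in that game.
team_totals = cs.groupby(["team", "game_id"])["cs"].transform("sum")
cs["cs_share"] = cs["cs"] / team_totals

# Average share per role for each team: a wide spread between roles indicates
# deliberate funneling (e.g. toward the AD carry), a tight spread an even split.
distribution = cs.groupby(["team", "role"])["cs_share"].mean().unstack("role")
print((distribution * 100).round(1))
```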

Another impressive facet of SKT's strategy is their deliberate allocation of gold to their players. Unlike Impact, Marin prefers a carry-oriented approach to the game. His aggressive style relieves pressure across the map and gives him a strong presence in teamfights alongside Faker. However, this requires additional gold for him to buy items and remain relevant against his opposing laner.


 
The figure above shows the average CS difference for each player compared to their lane opponent at different phases of the game. During the pre-season, when Impact was in the top lane, Faker and Bang would end, on average, 121 and 39 CS above their lane opponents, while Impact would be even or just below his. In the games with Marin, however, Faker and Bang's CS leads are halved, CS that is effectively donated to Marin. This conscious effort has been rewarded by Marin's impressive 10.5 KDA in their wins, compared to Impact's 4.3. Marin and Bang have helped create a much stronger and more relevant identity for SKT than would have been possible with Impact and Piglet. If Marin were forced to follow Impact's established utility role, SKT would not have been nearly as successful as they have been so far.

Western teams have segmented the game into phases of objective control, but with a haphazard distribution of resources. Sometimes farm is deliberately funneled to members in ways that are counter-productive to the team's goals. It'll be interesting to see whether teams like CLG and Curse accommodate their new solo laners, or force them into the established roles set by their predecessors.


Final Thoughts


SKT has started the preseason in dominating fashion, dropping only 1 game out of 10. Is this a foreshadowing of things to come? It's hard to tell. SKT certainly looks like the most polished team; Bengi's mechanics have drastically improved and the team has found a modicum of synergy between Marin's carry-oriented style and Bang's calculated approach. But veteran teams like Najin and KT aren't to be dismissed. Najin didn't play Ohq in their set against SKT, a player who has ensured victory in every game he's played in the preseason. Additionally, teams like Samsung and HUYA have shown more than glimpses of potential. Samsung especially, who have shown time and again that their mechanical skill is top notch, will only get stronger under their veteran coaching staff as the year progresses.

Shifting to the west, the examples above show how SKT have formed an identity, displayed in their pick/ban phase and resource allocation. But if western teams are to compete with other regions, they must innovate for themselves, discover their own opportunity costs, and fortify their weaknesses instead of mirroring other teams. Since Season 2, western teams have played behind the curve, relying on empirical analysis of regions with a more established infrastructure.
 
With Riot's acknowledgement and financial support for coaches and analysts, it's time for teams to learn from SKT: not only the superficial elements of meta champions and vision strategy, but the deeper insights into how they approach the game to accentuate their players' strengths within the context of the team. Will CLG and Alliance give the proper resources to their new members? How will Team8 and Roccat use the unique styles of their top laners? It'll be interesting to see if the west adapts in the new season.

Elevating the Educational Experience

Introduction

Elevate is a recent addition to the genre of brain fitness games. The company markets it as a cognitive training tool designed to build communication and analytical skills, providing members with a personalized, game-based training program that adjusts over time based on performance. It features a free-to-play mode with limited features and a subscription that unlocks additional exercises and modes for $4.99/month. Since launching in May 2014, Elevate has been a commercial success with more than 5 million downloads on the App Store and Google Play, and it was selected by Apple as App of the Year for 2014.

After installing the app, Elevate asks the user for preferences in training goals (e.g., "Articulate your thoughts more clearly"), and more recently has added a formative assessment that tests key areas to gauge your initial skill level. With its emphasis on communication and analytical skills, the application tracks users' speaking, writing, reading, listening, and math abilities. Each of these categories has 3-4 exercises that users can complete in order to gain proficiency points for the ability. The difficulty of these exercises increases with performance up to a maximum level. Elevate unlocks 3 semi-random exercises every day, depending on personal training goals, and tracks the user's performance history. The exercises are all unique in design and execution, each with its own graphics, sounds, and user interaction.

In this article, we will delve into the inner workings of Elevate because it exemplifies the modern learning experience, combining design elements with a data-driven approach to learning. We will consider its engaging user experience and explicit tracking of proficiency levels, while comparing them to their counterparts in public education. While Elevate is targeted toward a different audience with its own goals, it's possible to take tangible elements from its structure and apply them to the learning experience in traditional schools.

The 'Diversifying Design' section below refers to Case Studies of the five Elevate subjects, each with a featured game evaluated for content, engagement, and validity.

 

Diversifying Design

 

For Learning

On average, students' learning experiences are extremely structured, often at the expense of engagement. Especially toward secondary school, classroom layout follows a pattern, there is little variation in teaching style throughout the year, and students spend the entire day sitting at desks, taking notes, listening to the instructor, participating in some discussion, or filling out assessments. The biggest criticism from students not engaged in learning is that school is 'boring'. Most adults assume that this comment is directed at the content, but perhaps it's equally important to consider the environment and how the content is presented.
 
One of the most remarkable attributes of Elevate that distinguishes it from traditional learning is the number of different ways players can interact with the games. Even within the confines of the application, you can tap the screen, swipe left or right, choose between responses, drag facts to appropriate markers, build a tower, and more. Consider the Mathematics Case Study. The featured game teaches players how to organize units in different systems of measurement by having them build a tower of length or weight units in order. In school, a student will learn the ratios between the units and convert them on a test, perhaps with a few examples. Especially as we progress to middle and high school, concepts and ideas are taught, practiced, and assessed in purely abstract terms. This game design lets students visualize explicit and relevant numbers. In order to succeed, you must still internalize the proper ratios, but the application of this knowledge is more meaningful to a range of learners, and allows abstract, visual, and tactile learners to absorb the information.

Singapore math is one traditional approach to mathematics education that uses some of the principles outlined above to teach standardized curricula. The method uses three steps to teach the same concept: first, students engage in hands-on learning using concrete objects, then they draw pictorial representations of the concepts, and finally they solve problems using numbers and symbols. Variations of this methodology are still employed in elementary school, but concrete and pictorial exercises don't mature well with students and are therefore discarded in secondary education. Digital tools offer a clean solution to this problem, and well-designed applications or games can supplement in-class instruction to provide diverse and interactive learning opportunities in not just mathematics, but other subjects as well.

 

For Testing

Assessments present an interesting design issue because the process itself is so detached from learning. Standardized tests use Scantron paper forms, which allow machines to quickly translate responses into scores. Widely used to save time and resources, the format limits tests to questions with discrete answers, where the student must choose between options. Many secondary schools have begun to follow this trend, partly for its benefits, but also because it helps prepare students for the standardized tests to come. Even schools and teachers that don't use it directly employ the same elements in their tests, like multiple-choice, true/false, and matching questions. This simplification comes at the expense of student learning by introducing discrepancies between the goals of the knowledge or skill and how it is tested.

Alternatively, let’s consider Elevate’s approach, where assessment and learning are intertwined and game design reflects the development goals. The games offer instant feedback and corrections to mistakes, prioritize accuracy and speed differently, and reward streaks and specialization over comprehensive knowledge gain. 

For example, the Syntax game in the Writing Case Study helps you identify grammatical errors. It displays a highlighted word or phrase within a sentence, and you must choose whether its usage is correct. If the usage is incorrect, regardless of whether you were able to identify it, the game displays the proper replacement in its place. This adds a dimension to the otherwise simple true/false structure: the player gains more information than just whether he was right or wrong. Both the Conversion game in the Math Case Study and the Precision game in the Speaking Case Study have similar features, providing immediate feedback or additional knowledge to the player after his response.

The game designs also reflect the values of the goals. The Syntax game features a boat trying to reach the harbor before the sun sets, personifying progress and time respectively. The boat moves forward with every correct answer but back with every mistake, and the game is scored on the time left. Therefore, a player who is quick but makes an error may earn more points than one who is careful. It's an assessment designed to prioritize speed and efficiency of the skill. Conversely, the Processing game from the Reading Case Study uses its design to control reading speed but values accuracy by testing comprehension: you lose the game if you make two mistakes in the session. These are but a few ways that design in assessments can help cater to specific goals and skill objectives.
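To make that trade-off concrete, here is a toy model of the boat mechanic as described above; the distances, timings, and scoring numbers are invented purely for illustration and are not Elevate's actual values:

```python
# Toy model of the boat mechanic: correct answers advance the boat, mistakes push
# it back, and the final score is the time remaining when it reaches the harbor.
# All numbers are invented to illustrate the speed-over-caution trade-off.
def boat_score(answer_times: list[float], correct: list[bool],
               distance: int = 10, time_limit: float = 60.0) -> float:
    position, elapsed = 0, 0.0
    for t, ok in zip(answer_times, correct):
        elapsed += t
        position += 1 if ok else -1
        if position >= distance:
            return max(0.0, time_limit - elapsed)   # score = time left at the harbor
    return 0.0                                       # never reached the harbor

# A fast player with one mistake (12 answers at 3s each) beats a slow, flawless
# one (10 answers at 5s each): 60 - 36 = 24 vs. 60 - 50 = 10.
print(boat_score([3.0] * 12, [True] * 5 + [False] + [True] * 6))
print(boat_score([5.0] * 10, [True] * 10))
```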

Over the last few years, teachers have begun to use an assortment of formative assessments and digital tools to gauge student learning and provide feedback. One widespread example of this innovation is the clicker, which allows teachers to gauge the proficiency of students and determine whether more time is needed on particular concepts. But a majority of these tools are either diagnostic in nature, directed toward teachers, or derivatives of simplistic evaluation metrics. A combination of good design and digital tools can transform assessments into powerful and engaging learning tools that empower students.

 

For Engaging

The final argument for diversifying design in education is engagement. Our culture has evolved considerably within the past few decades, especially with regard to technology and how we interact and communicate with the world. The current generation of students has grown up with unfettered access to the expansive world of the internet. They have always known the accessibility of smartphones, computers, and tablets. Their video games and movies feature incredible graphics and unprecedented interactivity. This consumer culture is accompanied by an escalation in engagement that our current system of education has so far been unable to match. It's only natural that students feel uninterested reading from textbooks and copying notes from a chalkboard.
 
Much of Elevate's novelty comes from its visual design and interactive games. Consider Focus from the Listening Case Study, where users listen to a conversation between two people about a theme with multiple subtopics. These subtopics are visually represented by three circles on the screen. As facts about them surface in the conversation, they are added to the screen as hollow rings. Players must drag each ring to its respective topic circle, after which the game rewards you visually and audibly with a flash and a sound effect. The fact rings then rotate around the circle until the topic set is completed, and the user is left with a list of facts on each topic. While the purpose of the game is actually to improve concentration and memory, it could just as easily be a more modern approach to note-taking, where audio and visual cues reward adding information.

The process of increasing engagement through digital tools is met with resistance from some educators, who offer some variation of the reasoning, 'I learned things this way, why can't they?' The problem with this line of thinking is that the goal of education is to prepare students for the future. It not only takes more resources to teach students with tools outside of their comfort zone, but it also isolates their learning from their interests and peer interactions. If education is to cater to the needs of children, educators need to adapt and teach through a language that's familiar to students.

 

 

Data-Driven

 

Students in the current public system maintain an unhealthy relationship with numbers that are used to evaluate them and their peers. Assignments, quizzes, participation, reports, tests, projects, and exams all provide a number that is supposed to measure students and hold them accountable for their learning. In this section, we will examine the philosophy behind how Elevate measures and uses the proficiency levels of the players and compare it to the approach in schools.

Compared to the escalating linear approach in schools, Elevate employs a more gradual, cyclic approach. Each subject category is divided into core competencies rather than units. Players begin at a low difficulty in each game, and their performance determines both the subsequent changes in difficulty and their proficiency levels. This creates a system that assesses students, reveals the explicit metrics used in the evaluation, tracks trends in performance, and refines itself into a personalized model of achievement that values constant development.

 

Transparency in Numbers

Elevate shares with its players all the explicit numbers behind their performance. In the same way that the games' design highlights certain objectives, the scoring system explicitly states the different ways students can improve.

At the end of each session, the game displays the base score for completing the game as well as additional bonuses for speed, accuracy, and difficulty. This has several implications. Notably, the game explicitly recognizes the value of participation by awarding a base score: it assigns value to the player putting in time to practice a skill, and to the learning that takes place because of it. Next, the speed and accuracy bonuses give players points for finishing faster and making fewer mistakes, respectively, than what is required to complete the session. Finally, the difficulty bonus varies with the level; it is negligible at lower levels but substantial (20%) at higher levels.
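To illustrate how such a score might be composed (Elevate's actual formula and weights are not public, so every number below is a placeholder), a session score could combine a base participation score with speed, accuracy, and difficulty bonuses:

```python
# Purely hypothetical reconstruction of a session score in the spirit described
# above: a base score for participation plus bonuses for speed, accuracy, and
# difficulty. The real weights used by Elevate are not public.
def session_score(time_used: float, time_limit: float, mistakes: int,
                  answers: int, difficulty_level: int, max_level: int = 10) -> int:
    base = 1000                                                  # value of simply completing the session
    speed_bonus = max(0.0, 1 - time_used / time_limit) * 300     # finishing faster than required
    accuracy = 1 - mistakes / max(answers, 1)
    accuracy_bonus = accuracy * 300                              # making fewer mistakes than required
    difficulty_bonus = base * 0.20 * (difficulty_level / max_level)  # negligible early, up to 20% at max
    return round(base + speed_bonus + accuracy_bonus + difficulty_bonus)

# Example: a quick, near-perfect session at a high difficulty level.
print(session_score(time_used=45, time_limit=90, mistakes=1, answers=20, difficulty_level=8))
```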

The numbers also help target the specific skills of each subject. For example, the Proportion game asks players to scroll through and match various fractions to their respective decimal and pictorial representations. As players improve, they become more familiar with the ratios and are able to complete the exercise faster. Conversely, Retention asks the player to listen to a list of items and answer questions about them from memory. The speed at which the player responds matters far less than his accuracy, because the skill being developed is memory, not response time.

When you consider assessments in schools, students can't easily discuss their proficiencies, because they are never explicitly stated. In the course of reviewing homework or receiving tests and quizzes, both students and educators are made aware of how many mistakes were made, but not necessarily what they specifically were or whether they are relevant. In order for students to improve, there needs to be motivation, but also clarity regarding direction. For example, if a test is organized by question structure (multiple choice, matching) instead of subject category (fractions, graphs), it's harder for students to recognize where they performed well and where they faltered. A student who can see that he missed four graph-based problems is better informed than one who sees he missed two true/false and two matching problems. There are a host of similar initiatives that schools and teachers can pursue by finding synergy between transparent assessment structure and course design.

 

Tracking Scores and Recognizing Trends

Elevate’s loop structure of Assess > Reveal > Refine > Assess… has another enormous advantage; the structure allows players to track their scores and recognize trends, a sort of meta-learning. 

At the end of each session, Elevate displays a graph of your recent scores in the game along with your high score. The graph above shows a player slowly developing in two subjects: Reading (Connotation and Visualization) and Math (Proportion and Tipping). If we assume that the design of progression is consistent, several points become significant. First, despite Proportion and Tipping both being subcategories of Math, the starting skill level and rate of development are different. As the difficulty in Proportion increases, the player goes through phases of equilibration, where he adjusts to the new standards, resulting in dips in performance. Meanwhile, Tipping shows slower but more consistent growth.
 
This information is valuable to players because it allows them to gauge not only their current status, but also how far they have progressed in a given period of time. Next, as players achieve the highest level of mastery in a game, the scores normalize for difficulty, but players can still improve in other metrics such as speed and accuracy, which encourages them to keep playing and maintain their skill. Elevate's goals are much more grounded in application and place less importance on the consistency of objective metrics; however, the model it presents is extremely relevant when applied to schools.
 
Students often don't have an accurate gauge of their skill level in subjects, and their perception depends on their test and exam scores because of the weight those carry in final grades. Similarly, parents have to rely on teacher conferences to glean their child's growth, and even these conversations turn out to be subjective, depending on the parents' expectations. Finally, while teachers have a general idea of each student's proficiency, their understanding is based on in-class interactions and discrete scores.

Valerie Shute and Matthew Ventura offer the following metaphor. Retail outlets in the past had to close down once or twice a year to take inventory of their stock. But with the advent of automated check-out and bar codes, these businesses have access to a continuous stream of information that can be used to monitor inventory and flow of items. Not only can a business continue without interruption: the information obtained is also far richer than before, enabling stores to monitor trends and aggregate data into various kinds of summaries as well as to support real-time inventory management.

Similarly, Elevate's model applied to schools can help both students and parents recognize development in various categories throughout the year. Students would have a solid basis for introspection regarding their strengths and deficiencies. For example, imagine a student who completes daily homework assignments in a digital or web-based application. Over the course of two weeks, the application points out that while the student has kept up with expected math skills, his Algebra scores are lower than average. As the student starts to make a conscious effort in that category, the Algebra scores begin to improve.

Teachers benefit enormously as well. The past decade has seen an increase in initiatives to measure teacher effectiveness in the classroom. However, one major criticism that teachers cite is that these evaluations don't account for the starting level of the students: if a student is too far behind at the onset of the class, teachers point out, it's impossible to get him to the appropriate standard. Tracking performance data helps alleviate many of these issues by demonstrating student levels at the beginning of the year as well as their development throughout it. Additionally, teachers can compare growth rates and proficiencies among clusters of students, helping them tailor the content and pace of the class.

This process of data tracking is not simple; it requires both a proper infrastructure and class designed around its usage. Teachers need professional development to be able to interpret and discern relevance from this information and students need to be able to use the tools effectively. But the potential benefits after the capital investment are recurring, both for the teachers and the students. 


Adjusting Pace and Tempo

In order to develop players' skills in its games, Elevate tries to keep difficulty within a range between comfortable and challenging. It does this through an iterative process that responds to player performance, and it uses the same methodology to adjust player proficiency.


At the end of each session, you receive a score composed of a base score along with bonuses for difficulty level, accuracy, and speed. The game evaluates your score and adjusts your difficulty and proficiency level accordingly. The graph above shows normalized scores compared to the subsequent difficulty increases. Additionally, the player’s performance in each game also affects their proficiency in the subject. For example, the graph above compares the scores from the game ‘Retention’ to the resulting increases in the ‘Listening’ subject proficiency. In this scenario, the player receives 6-8 proficiency points per session because his current level in Listening is quite high. If the player had a higher or lower proficiency level, the points per session would decrease or increase respectively: a novice-level player would gain more for the same performance than someone at an advanced or expert level.
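
A minimal sketch of how such a scoring and proficiency-gain rule could work is below; the constants and formulas are my own assumptions for illustration, not Elevate's published implementation:

    def session_score(base, difficulty_level, accuracy, speed_bonus):
        # Base score plus bonuses for difficulty, accuracy, and speed.
        return base + 50 * difficulty_level + int(100 * accuracy) + speed_bonus

    def proficiency_gain(score, current_proficiency, max_proficiency=5000):
        # The same performance is worth more to a novice than to an expert:
        # the gain shrinks as current proficiency approaches the cap.
        scale = 1.0 - current_proficiency / max_proficiency
        return max(1, round(score / 100 * (0.5 + scale)))

    score = session_score(base=300, difficulty_level=4, accuracy=0.92, speed_bonus=40)
    print(score)                                              # 632
    print(proficiency_gain(score, current_proficiency=4200))  # high proficiency -> 4 points
    print(proficiency_gain(score, current_proficiency=800))   # novice level     -> 8 points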

This concept of proficiency is a mastery-based approach to learning rather than a score-based system. Players have a proficiency rating for each subject: 0-1250 for novice, 1250-2500 for intermediate, 2500-3750 for advanced, and 3750-5000 for expert. The formative assessment at the beginning of the game places you at a proficiency level. You then have to do several things, the first of which is to play different Listening games, because mastering and repeating a single game leads to diminishing returns in proficiency points. Next, you have to maintain or increase your level of performance in games; otherwise you will begin to lose points. Elevate has quantified breadth and depth of skill within each subject in a way that motivates novice players to improve and expert players to maintain their performance. More importantly, this happens at an individually appropriate pace set by difficulty levels that iterate in response to your sessions.
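
The bands above translate directly into something like the following sketch; the diminishing-returns rule for repeating a single game is an assumption on my part, included only to illustrate the incentive to vary games:

    def proficiency_band(rating):
        # Bands as described above: 0-1250 novice, 1250-2500 intermediate,
        # 2500-3750 advanced, 3750-5000 expert.
        if rating < 1250:
            return "novice"
        if rating < 2500:
            return "intermediate"
        if rating < 3750:
            return "advanced"
        return "expert"

    def repeated_game_gain(base_gain, times_played_recently, decay=0.6):
        # Each recent repetition of the same game is worth less than the last.
        return round(base_gain * decay ** times_played_recently)

    print(proficiency_band(3100))                        # advanced
    print([repeated_game_gain(8, n) for n in range(4)])  # [8, 5, 3, 2]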

The application of this flexibility already exists in the classroom in the form of differentiated instruction, where teachers respond to the needs of their classroom by catering to specific groups of students or to the results of recent assessments. Most classrooms contain a diverse group of students with regard to skill level, learning abilities, and interests. Teachers can differentiate content or process to best suit the needs of their class. For example, if a recent assessment shows that the class is behind the curve, the teacher can reiterate concepts or adjust her pace. If current events are of interest to students, the teacher can leverage that to teach concepts, or if the students in the class are technologically inclined, the teacher can use more digital tools to teach.

The biggest issue with differentiation is that no technique, direction, or process is completely inclusive. Slowing the pace in the classroom inhibits the growth of students who have the potential to learn a greater depth of knowledge through a rigorous, challenging pace. The same applies to adjusting process and focus. Digital media, particularly video games, have refined the ability to engage players and develop relevant skills at a pace that challenges them. If we can extract these elements and apply them to our assessment philosophy, it will relieve teachers of much of the burden of differentiation.

 

Conclusion

Integrating technology in the classroom allows schools to establish some equity in terms of learning and opportunities. The problem is that the technologies and digital tools used in schools are primitive compared to those used in nearly every other aspect of current culture. Elevate shows how the proper integration of design can create an engaging and personalized learning experience that also highlights focus and objectives. The same elements, applied properly to standardized curricula in mathematics, literacy, the sciences, and so on, have the potential to create a more engaging and relevant environment for students.

The theory of Connected Learning proposes that the most meaningful and sustained learning occurs when students are motivated by understanding how a particular piece of knowledge or skill applies to their own lives. However, the current system prioritizes accountability through scores and assessments that determine a student’s level of academic achievement. The Elevate case study demonstrates that there is a compromise between the two: make the assessments themselves engaging and relevant enough that they produce learning. To be practical, this entails realigning the philosophy of assessment from stigmatizing mistakes to rewarding development, and redesigning its structure to diversify goals and offer feedback.

Formative assessments, especially short-cycle ones, can help bring about this philosophical shift. Elevate shows how a data-driven approach using a series of short game sessions can provide meaningful information about the learner and personalize their learning. This information can help students focus their learning and connect parents to the full details of their children’s learning. When this assessment approach is coupled with professional development for teachers, they can leverage the data to compare and track group performance and differentiate more efficiently in the classroom.

Our current system of public education is a protracted process of sorting students for university entrance (and later job placement), and as a result, the deeply rooted culture of comparative assessment is not likely to change dramatically in the next decade. However, we can shift the surrounding infrastructure so that these assessments are meaningful to students, teachers, and parents in terms of development, rather than reductionist measures of judgment.


Homeschooling in the United States

Homeschooling may seem peripheral to the institution of public education in the United States, except that, of the parents who homeschool their children, 74% cite dissatisfaction with academic instruction at schools and 91% are concerned with the school environment. Additionally, approximately 50% of these parents name one of these two issues as the primary reason for their choice to homeschool. As a result, the roughly 3% of school-age children currently homeschooled receive a unique education, bereft of standardized practices and traditional social dynamics. Over the past few months, I had the opportunity to interview several parents throughout the United States whose methodologies, motivations, and results offer an in-depth perspective on the philosophical and practical efficacy of the practice.


Working with Alliance as a Remote Analyst

Last month, I had the opportunity to do some remote analysis for Alliance and to offer a second opinion on a number of topics surrounding scrims, the tournament, mindset, approach, and more. I wanted to thank Jordan for involving me in the process and allowing me to share portions of my brief journey. It was an invaluable experience and gave me great insight into the professional scene.

 

Introduction

Someone had posted a question on Leviathan's ask.fm account, to which Jordan replied that he would like to help any analysts trying to break into the scene. While my end goal isn't simply to do analysis, I thought it would be an interesting experience, so I sent him a detailed email asking to be introduced to a team in NA, with my resume, a reference my project manager at Boeing wrote for me, and links to my website and writing. Given his busy schedule, I wasn't actually expecting a reply before Worlds was over, so I was pleasantly surprised when, a few days later, he responded saying that he was impressed with my qualifications and thought my perspective would be valuable. Additionally, he asked if I would be interested in helping him and Alliance at Worlds. I was excited just to be involved in the process and added him on Skype immediately.

The next day, we spoke on Skype for the first time and he mentioned that he'd read some of my articles and thought I would be a positive influence on the team. We delved into some core topics in that first conversation, and towards the end, my first assignment was to review their scrims and take notes. They were BaronReplay files, software I had never used before, but it was pretty intuitive and easy to set up. A few days later, I was also tasked with recording and uploading video versions of the scrims to YouTube privately so that the players could watch them on other devices. He also mentioned the idea of creating a video library of games for players, which I think has a lot of merit, especially during the regular season.

 

Analysis

After receiving the scrim data, I was also invited to several chat groups labelled NJWS and C9, each with 7-8 people. While it was interesting to participate in the discussions, I personally didn't feel that I was adding much, and I stopped engaging in the conversations. Instead, when Jordan asked someone to put together a summary presentation on Cloud 9 and Shield, I realized that it would be pretty interesting to rewatch games from an analyst's perspective. Since I already knew the general dispositions of both teams, I decided to narrow the focus to vision strategy, because it was something both teams did well and something Alliance somewhat lacked at the time. I rewatched and took detailed notes on Cloud 9's playoff matches [8 games] and Shield's gauntlet run [10 games]. Then I converted these notes into summaries of each player and their inclinations, as well as how the teams approached vision with respect to their compositions. I'm not sure how Jordan or the Alliance players used this work, but the presentations could easily have been three times as long depending on the perspective taken. I decided to focus on vision control, but objective-based movement and teamfight/skirmish strategies are similarly important. Separately studying these elements before synthesizing them into a cohesive team strategy is the next step for analysis in western League of Legends.

Pre-Group Stage research on Cloud 9 focusing on their vision strategy and projected picks

 

Scrims revealed to me that Alliance's biggest flaw was vision control, causing them to lose control of early dragons and allowing opposing teams to roam freely around the map. After I pointed this out, Jordan said that they had resolved the issue in the early game but still struggled with mid and late game decision-making. I had been thinking about this topic a lot during my brief involvement helping a team in the Challenger scene. I helped break down the mid-late game into objective control, vision, and map movement for the shotcaller, and offered recommendations on how to improve each. Some of these included reviewing replays with partial team vision and replaying team movement before major objectives or fights. Next, I advised pausing replays at various points in the game and having the shotcaller discuss each team composition, its strengths, and its end goals before watching the rest of the replay to measure the differences between expected and actual outcomes. Other suggestions included practicing specific elements, like overdoing vision control to appreciate the value of information and then scaling it back to increase efficiency and complement compositional strengths.

Pre-Group Stage research on Najin White Shield focusing on their vision strategy and map movement

 

Next, I was told that the pick/bans for each team were decided the day before the games. I wasn't asked for it, but I decided to consider several pick/ban scenarios with the help of a few friends. Assuming certain bans from Alliance's side, like Zed and Rumble, and that opposing teams would certainly ban away Irelia and possibly Zilean, I made the following presentation focusing on team compositions, their synergy, and how to approach each with vision. I didn't have the opportunity to speak with the players about pick/bans or priorities, but I was hoping this would be enough based on my knowledge of their champion pools and resource allocation. Overall, I spent approximately 60 hours reviewing videos, discussing ideas, and creating the presentations.

 

Notes on some sample team compositions for Alliance assuming particular bans like Zed, Rumble, Zilean, and Irelia

 

Issues

Outside of analysis, Jordan and I spoke a lot about how to approach the game and the tournament. The first discussion revolved around expectations and mentality. At the time, he was concerned about the players' doubts about their own condition and their ability to beat the top Korean teams, as well as the community's reaction to his confidence on social media. Drawing from my experience and my conversation with Evan McCauley, we talked about the difference between having confidence in your abilities and the issues with displaying that confidence. It was also my contention that the benefit of humility is that it lowers community expectations and therefore reduces the pressure on players to perform, allowing them to concentrate on the task at hand. Next, as Evan and I had discussed earlier, I pointed out methods to build confidence, from reaffirming faith in teammates' skills to offering constructive criticism in 1v1 conversations instead of a group setting. Again, these were merely discussions and I was simply offering my perspective, but it was encouraging to see support staff actively consider and attempt to solve these issues.

You can find the discussion with Evan McCauley here: Link

 

Final Thoughts

I wanted to thank Jordan (@LeviathanLoL) for giving me this opportunity to work alongside the team and for allowing me to share my experiences. Even though Alliance still has a lot to learn, I believe that the top western teams are at least on par with Chinese teams and rapidly closing the gap on the Korean scene. However, an influx of effective support staff and increased professionalism are the next hurdles in furthering the western scene. I would encourage anyone with the interest or passion to reach out to people in the industry. In my experience, everyone in the scene is very receptive to newcomers and willing to help and encourage growth.