TaskSource Co-Op Overview Whitepaper

Contact: abouttasksource@gmail.com

Table of Contents
Problem Statement
Co-op Structure
Membership
Governance
Payments
Benefits
Privacy and Security
Data Integrity
Requester Experience Guidelines
Requester Dashboard
Pricing
Funding
Conclusion

Table of Figures
Figure 1 - The tiers of membership at TaskSource Co-op
Figure 2 - The functional subgroups of TaskSource along with their weighted importance, necessary for the labor input formula
Figure 3 - The inactivity flow for a user that is clocked in to their account for their work day
Figure 4 - An image of a gamification CAPTCHA with various images of items on the left and a plate on the right, with instructions to "Put food on your plate" by dragging and dropping food items onto the plate
Figure 5 - Overview of the requester experience on TaskSource

Problem Statement

Crowdsourced data platforms provide access to a large workforce of people who can complete various virtual tasks. Common tasks found on crowdsourced data platforms include data validation, survey completion, content moderation, and audio transcription. Markets that seek out crowdsourced labor often include social media sites, social science researchers, and artificial intelligence researchers. However, though workers spend time and effort completing these requests, their compensation falls below minimum wage standards, with some sites allowing a minimum payout as low as $0.01. This paper seeks to highlight the concerns of utilizing such platforms and to propose a blueprint for a reimagined alternative: a worker-owned crowdsourcing data cooperative.

To date, the most widely used crowdsourced data platform is Amazon Mechanical Turk, a marketplace through which a virtual workforce can be recruited by requesters for the completion of HITs, or Human Intelligence Tasks. Notable requesters include the Allen Institute for Artificial Intelligence, Pinterest, and WikiHow. Requesters are permitted to decide how much they want to pay workers for assignments, with the mandated minimum payout being just $0.01. Permitting such low payouts on the site has led the average hourly earnings of MTurk workers to fall to approximately $2/hr. [1]

Despite the lack of a livable wage, many workers turn to crowdsourcing sites for the bulk of their income. A study conducted by the Pew Research Center analyzed a week of requests and quantified the industries that tasks come from as follows: "36% of the unique requesters were either academic groups, professors or graduate students. That was slightly more than the 31% which were businesses. Identifiable nonprofits were barely represented at 1%." [2] Academic researchers make up a significant portion of MTurk requesters. Though a significant portion of online studies are run on crowdsourcing platforms, data quality and the validity of obtained results are often prioritized over fair pay and the experience of users. Though academic researchers turn to the platform for quick access to a diverse population pool, a survey of MTurk workers conducted by the Pew Research Center, compared against the 2015 Census, reveals the platform's population is more homogeneous than the true population of adult workers. The survey found that Turkers are more educated than working adults, with 51% of MTurk workers reporting having college degrees compared to 36% of adult workers.
74% of MTurk workers reported living in a household earning $75,000 or less, compared to 47% of adult workers. 77% of Turkers indicated they are white and 23% identified with other races; in comparison, 65% of the working population is white and 35% is of another race. Most notably, 40% of workers were found to be located in the US and approximately 30% in India. The large number of crowdsource workers from India can be explained by the population's strong English language skills coupled with limited job availability.

[1] Hara, Kotaro, et al. "A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, doi:10.1145/3173574.3174023.
[2] "Research in the Crowdsourcing Age, a Case Study." Pew Research Center: Internet, Science & Tech, 11 July 2016, www.pewresearch.org/internet/2016/07/11/research-in-the-crowdsourcing-age-a-case-study.

Additionally, another issue workers face involves qualifying for work once on a platform. When it comes to workers qualifying for surveys, MTurk has no isolated screening process; rather, the screening process to vet workers for survey qualification is typically integrated into the survey itself. The lack of pre-screening disadvantages both researchers and participants, because researchers will either over-include participants or have to filter through survey data themselves and reject candidates who filled out the survey but did not actually qualify. In addition, workers are often likely to participate in several surveys that are related or released by the same researcher for similar experiments, leading to data non-naivety problems, in which a participant's familiarity with common methodologies allows them to guess the expected answers. This can impact a researcher's ability to obtain truthful and meaningful results.

Mechanical Turk is not the only platform that profits from crowdsourced data. Additional crowdsourcing platforms include CloudResearch (formerly known as TurkPrime), a platform marketed towards online researchers that advertises a flexible payout set by the researchers: "Pricing is set by you, the researcher, based on the complexity of the task assigned. Typical per-worker payments fall from $0.02 to $2.00." [3] Another site, Positly, is marketed towards improving the experience of researchers seeking crowdsourced labor on Amazon Mechanical Turk through better control of their data. Currently, the platform that provides the highest payout standard is Prolific, which mandates that researchers reward participants with at least $6.50 an hour.

[3] "CloudResearch Plans & Survey Cost Calculator." CloudResearch, 10 Dec. 2019, www.cloudresearch.com/pricing/.

A survey of online user testimonies showed that workers of online crowdsourcing platforms share many of the same concerns. The first is a grievance against low pay and the lack of consistent work, which cause an unreliable income stream. The second is a complaint against the amount of uncompensated time spent waiting and searching for the next task. Additionally, while existing platforms are primarily used for academic research and the training of machine learning algorithms, workers have sometimes run into controversial and invasive topics within the tasks they complete, often with little to no warning of the task's content beforehand.
Some platforms even give specific disclaimers that they are not responsible for what users will encounter, such as Prolific's Academic Participation Terms, which state that "your Submission and completion of each Study will take place on whatever study platform has been selected by the Researcher, and not on our Website. We do not moderate Studies and are not responsible for their content. Any participation by you in any Study is at your own risk." [4] Despite this, some workers have recalled running into gruesome visuals of mutilated bodies, botched surgery videos that spared little detail, and even what seemed to be child pornography [5] as they completed tasks on platforms. Additionally, in some tasks, workers were asked to share deeply personal information ranging from their Social Security number to a traumatic experience in their life. Some workers have even experienced requests that appeared questionable, such as sending photos of their feet, with no explanation of what the photos would be used for. [6] All of these tasks often pay no more than several cents.

[4] Prolific Academic Participation Terms. Prolific, 2021, p. 3, www.prolific.co/assets/docs/Participant_Terms.pdf. Accessed 30 March 2021.
[5] Mehrotra, Dhruv. "Horror Stories From Inside Amazon's Mechanical Turk." Gizmodo, 28 Jan. 2020, gizmodo.com/horror-stories-from-inside-amazons-mechanical-turk-1840878041.
[6] Mehrotra, Dhruv. "Horror Stories From Inside Amazon's Mechanical Turk." Gizmodo, 28 Jan. 2020, gizmodo.com/horror-stories-from-inside-amazons-mechanical-turk-1840878041.

Amazon Mechanical Turk's Acceptable Use Policy contains statements reflecting that policies are in place to moderate certain content (such as "content that constitutes child pornography, relates to bestiality, or depicts non-consensual sex acts" [7]). However, based on the above testimonials of controversial content encountered on Mechanical Turk by current and former workers, it is unclear how this content moderation is being done. If potentially illegal and traumatizing content is still being surfaced to workers despite policy claiming otherwise, there are clearly not enough protections in place for workers and their mental and emotional health.

[7] "Acceptable Use Policy." Amazon Mechanical Turk, www.mturk.com/acceptable-use-policy.

Lack of content moderation is not the only way workers are left vulnerable - oftentimes, existing platforms exhibit preferences towards requesters. This is not always favorable, as requesters sometimes provide inaccurate estimates of the length of time a task will take - for example, a requester might provide a lower time estimate than it actually takes to complete a task. This is harmful: since workers are paid based on the completion time estimated by the requester, it not only wastes workers' time that could be spent on other tasks, but also contributes to workers being further underpaid for the amount of labor they put in, as they will not be appropriately compensated for the time it actually took them to complete the task. While workers can make a report to the platform when they believe a requester violates the rules, these complaints are often dismissed or remain unheard by the platforms, allowing requesters to stay unaccountable for their actions.

Additionally, requesters are in charge of rejecting or approving worker submissions, where an "approval" means that the requester received satisfactory data and the worker will receive payment, and a "rejection" means that the requester does not feel the worker's responses were adequate. If a worker receives a rejection on a task, regardless of the time spent on it, they do not receive payment.
The subjectivity of rejection is heavily discussed and debated on the online forums of virtual workers - in this process, requesters have the ability to invalidate minutes, and even hours, of work with just a few clicks.

Fear of rejection, general unreliability, and difficulty navigating the platform are a few reasons why workers on the Mechanical Turk platform have resorted to using third-party extensions and scripts to optimize their time while using the platform. Examining what tools are available reveals a lot about user desires and needs for the Mechanical Turk platform, as there are a myriad of options, all of which enhance the platform or add features that the platform does not currently offer. Extensions such as MTurkSuite and TurkOpticon track crowdsourced ratings of workers' experiences with requesters, allowing workers to filter out requesters with lower approval ratings. Block Requesters allows users to mark requesters with an "X" so they do not show up in search results. There are also services to get around requesters attempting to deceive workers, such as Requester ID and Auto Approval Time, which inform workers of a requester's ID (in the event that the requester tries to change their name) and of a task's auto-approval time. Furthermore, some tools simply make spending hours on the site a better experience: AutoPager allows users to automatically load the next search results page to minimize clicking, while Pending Earnings displays earnings that have not yet been approved as a total on the Mechanical Turk dashboard, below current total earnings.

Another prominent use case that some extensions cover is a concept called "Pandas", which stands for "Preview and Accept". When a user initially clicks on HITs on their dashboard, they are taken to a preview page, where they need to accept the HIT in order to actually claim it. Extension developers have discovered a workaround: rather than going through the Preview and Accept pages, a specific URL parameter can be inserted to bypass them and expedite the process. Extensions like Panda Crazy collect these Preview and Accept page instances for workers on a set cycle, preventing users from getting marked as bots while still allowing them to be efficient with their usage.

The presence of these scripts and extensions to boost the experience of the most-used crowdsourced data platform, Mechanical Turk, reveals that there is still a long way to go to make crowdsourced data platforms a pleasant experience for workers from a usability and productivity perspective. With workers running into issues such as a lack of consistent work, unmoderated content, unfounded survey rejections, and having to seek out third-party tools to improve their experience on platforms, there is tremendous opportunity to improve the process of crowdsourcing data. Research and analysis suggest that a root of the issues crowdsourced data workers experience is their lack of empowerment and ownership over their employment.
If workers could have a deeper sense of connection to the platforms they work on, as well as a say in how the platform operates, the power would fall to the workers, who could advocate for themselves and each other to produce a more positive, productive, and constructive working environment. Researchers would also benefit from these initiatives: workers having better experiences using platforms would create higher-quality data, as workers would feel more incentivized to contribute on a deeper level. With this in mind, this paper is intended to give a general outline of a potential solution to the issues crowdsourced workers face, by means of a hypothetical alternative platform.

Co-Op Structure

The alternative platform this paper presents is rooted in the idea of worker ownership and community. These ideals are best served by a cooperative structure, frequently known as a co-op. A cooperative can be broadly defined as an enterprise that is owned and managed by the people who work in it. [8] These enterprises are driven by values, not just profit; members have equal voting rights and act together to create better outcomes for everyone involved.

[8] Cambridge Dictionary. https://dictionary.cambridge.org/dictionary/english/cooperative

The co-op structure is a good fit for crowdsourced data because its governance structure actively works to eliminate many of the problems discussed above. The co-op structure was motivated by how the current company-contractor relationships and incentive structures of existing survey data platforms exploit workers, produce low-quality data, and are ridden with bots. A cooperative, an entity that is owned and democratically controlled by its members, is organized in a way that removes the relationships and structures which create these inequities and issues.

Currently, crowdsourcing data platforms classify workers as independent contractors. On the co-op platform, which will be referred to as TaskSource throughout this paper, workers are instead worker-owners: the platform's profit is created by the labor of the workers and is thus shared by the workers. TaskSource will be able to charge survey requesters enough to pay a living wage because the data generated on the platform is of a higher quality. This is because the incentive structure is based on ownership, time, and contributions rather than on how many researcher tasks are completed. Models based solely on task completion are not conducive to understanding tasks, processing information, and answering with logic and honesty. Additionally, it's not just the incentive structure which generates bad data on current platforms -- it's also the large number of bots that complete tasks, many of which have become sophisticated enough to pass the attention checks placed in the middle of tasks. When workers receive a fair share of the total co-op's profits, when a platform is built around worker community and quality controls that prevent bots, when a time rush is not placed on workers for their tasks, and when workers own what they create, the quality of data will inevitably increase.

Quality of data is important not just for surveys collecting research data but also for datasets that will eventually be used to train AI models. Over the past few years, the world has seen exponential growth in AI use. With this growth, implicit bias is making its way into algorithms that are starting to have a greater and greater impact on the world.
There are two main ways bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. [9] Both of these can occur, and are occurring, in existing crowdsourced data. [10]

[9] Hao, Karen. "This is how AI bias really happens—and why it's so hard to fix." MIT Technology Review, 2019, www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
[10] Raji, Deborah. "How our data encodes systematic racism." MIT Technology Review, 2020, www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/

Existing crowdsourced data platforms have a homogeneous worker base. Not only are workers from the same demographic categories, but they also interact with the site in the same way. This is not representative of real life. 41% of users on Mechanical Turk are aged 18-29, but only 23% of all U.S. working adults are in that age group. 6% of users are Hispanic, compared to 16% in the U.S. overall. 51% have a college degree, but only 36% do in the broader population. [11] These numbers are not representative of the actual population that will be affected by the outcomes of these studies and the technology they create. In addition, a homogeneous population is very likely to share similar preconceived notions or prejudices. This lack of diversity in the worker pool is detrimental to the creation of diverse, representative data and consequently will also hurt the results of research.

[11] "Research in the Crowdsourcing Age, a Case Study." Pew Research Center, 2016, www.pewresearch.org/internet/2016/07/11/research-in-the-crowdsourcing-age-a-case-study/

A worker co-op can aid in solving the problem of implicit bias. It is typically only a small subsection of the broader population that is able to work from home on a computer all day for a few cents per task: many potential workers have children to look after and essential jobs to attend to outside of the house, and cannot afford to earn the unlivable wages of existing data platforms. Current crowdsourced data workers, who can typically spend hours searching through tasks for no pay, often have savings or additional income to support them, as well as few responsibilities outside of the household. This results in a homogeneous user base on current platforms, where workers are likely from the same demographic backgrounds. However, when a platform can offer a living wage, flexible hours, and an easy-to-use and intuitively designed interface, more people can afford to prioritize it as a source of income, introducing workers with diverse backgrounds and perspectives to new opportunities that previously may not have been as accessible to them. This in turn can solve the homogeneity issue current crowdsourced platforms encounter and will create less biased datasets of better quality.

"We've already inserted ourselves and our decisions into the outcome—there is no neutral approach. There is no future version of data that is magically unbiased. Data will always be a subjective interpretation of someone's reality, a specific presentation of the goals and perspectives we choose to prioritize in this moment." [12]

[12] Raji, Deborah. "How our data encodes systematic racism." MIT Technology Review, 2020, www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/

Membership

Becoming a Member

Participating in community-building on TaskSource is an opportunity available to users in various ways, such as through discussion threads, and is highly encouraged. However, a user generally has the autonomy to decide whether or not they want to engage.
Users are identified as any workers who have not formally registered and applied to be part of the co-op as a member. Just like the ability to interact in discussion threads, official membership in the co-op is another feature of TaskSource. While all users have fair access to attaining membership status, membership is ultimately opt-in, and declining it does not bar users from completing tasks on the platform.

In order to become a member on the platform, users are required to have a minimum of 300 clocked-in hours on the platform and to express their interest in membership through the completion of a membership application. The membership application will introduce the user to the co-op structure of the platform so that new members can make better-informed decisions when they vote in polls. This will happen through an interactive module that asks the user a question after every few sentences. The membership application will involve questions such as "How many hours a week do you use the platform?", "Will you contribute to threads?", "Will you vote in polls?", and "Are users allowed to post personally identifiable information in threads?", in order to gauge a user's level of interaction with the platform and familiarity with the thread participation guidelines. The membership approval process requires the membership application to be verified by a member assigned to the task of membership verification. If a screener rejects an application, they must document a reason. After the user is approved for membership, they will be able to vote in polls that affect the direction of the co-op and apply for functional subgroup responsibilities.

To gain functional subgroup responsibilities, members must have demonstrated contributions to the site through at least 150 thread posts and 500 clocked-in hours on the platform. When a member applies to be part of a functional subgroup and is accepted, they become a worker-owner and receive an added bonus from the pooled funds. In order to take on this responsibility, members will fill out an application of interest, which will include questions like "What skills can you bring to the co-op?", as well as an interactive module which will introduce the member to the responsibilities of all the functional subgroups. After this application is approved by a vote of 4 out of 5 members assigned to the task of membership vetting, the user will gain access to onboarding materials and the TaskSource co-op documentation, which will serve as a resource to guide the member in accomplishing the tasks for their currently assigned rotation (see Governance).

Figure 1: The tiers of membership at TaskSource Co-op.

Community Building

In order to create a co-op that is truly diverse in perspective, and therefore in data, the platform and community should intentionally factor in inclusive practices at every chance possible. The platform itself should utilize inclusive design that is accessible to a variety of users - examples of inclusive design are utilizing color contrast ratios of at least 4.5:1, providing descriptive image captions, and encouraging the tasks researchers submit to be optimized for accessibility. Additionally, the design of the platform should not be overly complex, in order to appeal to people with various backgrounds and familiarity levels with technology. Thoughtful design that is inclusive and accessible will deeply benefit both the co-op and participating researchers, and will reflect a more accepting environment overall.
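To make the 4.5:1 guideline concrete, below is a minimal sketch, in Python, of the kind of check the platform could run on a text/background color pair. It uses the standard WCAG 2.x relative-luminance formula; the function names are illustrative and not part of any existing TaskSource codebase.

def _linear(channel: int) -> float:
    # Convert an 8-bit sRGB channel to its linear value (WCAG 2.x formula).
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    # rgb is an (r, g, b) tuple of 8-bit channel values.
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b) -> float:
    lighter = max(relative_luminance(color_a), relative_luminance(color_b))
    darker = min(relative_luminance(color_a), relative_luminance(color_b))
    return (lighter + 0.05) / (darker + 0.05)

def meets_minimum(foreground, background, minimum: float = 4.5) -> bool:
    # Check a text/background pair against the 4.5:1 guideline.
    return contrast_ratio(foreground, background) >= minimum

# Black text on a white background yields roughly 21:1, well above 4.5:1:
# meets_minimum((0, 0, 0), (255, 255, 255)) -> True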
Within TaskSource's online forums, in order to create a safe and welcoming environment, the platform will utilize pseudonymity for users. Users will have the ability to set a username, with no other personal user information surfacing to other users. Users will not be able to set profile pictures from their own images; instead, they will be able to choose from a variety of preset avatars. This will take the pressure of finding a profile picture off of the user while still creating a customizable feel, and will also eliminate external bias or assumptions drawn from personal image choice. To further allow personalization, users on forums will be able to set up themes on their forum posts and have personalized profile pages with customizable templates and thought-provoking questions that can be featured on their pages (e.g., "A life goal of mine is ____").

Outside of user identity, another valuable aspect of online community-building is moderation and expectations created by the community itself. To make sure that TaskSource's online community remains respectful and inclusive, it is important to take steps like having a single page of community guidelines, written in inclusive language, with clear courses of action for violations (for example: "First-time offenses will result in a warning, second-time offenses will result in a week-long ban from the platform, and third-time offenses will result in a permanent ban"). Additionally, with any widespread issues in the community, moderators should deliver clear, transparent, and frequent communication detailing what issue happened, when it happened, who might be affected, and what solutions or courses of action were taken for resolution. Another system that can incentivize community-building in TaskSource's forums is the use of reputation systems tied to how active a user is and how often they contribute to actions such as co-op voting and working in subgroups (see the Governance section below for more information on these). Because reputation is tied directly to higher investment in the community, users with higher reputations can be given greater access to responsibilities, such as moderator privileges on the forums.

Moderation on Threads

Before users can post on threads, they must attest that they have read the rules, which will be enforced by moderators. The rules are enumerated below:

1. Do not post or attempt to obtain personally identifying information about a user.
2. Do not post any offensive, obscene, or illegal material.
3. Do not insult or harass other users. Engaging in racist, sexist, transphobic, and generally discriminatory language will not be tolerated. Use of obscenities and foul language may also be removed.
4. Do not post advertisements. Content that is irrelevant to the thread's discussion may also be removed.

Moderators will enforce the rules by informing users when they have violated one of them. If a user receives 3 warnings, the user will be muted for 10 days. A mute will prevent the user from participating in threads. If a user is warned 5 times, a collective decision will be made on whether to ban the user through a vote. The vote will require at least 5 moderators to vote on the issue, and a ban will ensue if 75% or more of the votes are in favor. If the moderators decide against a ban, subsequent offenses by the user will immediately be reconsidered for a ban and a vote will be taken again.
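A minimal sketch of how this escalation policy could be encoded; all names and data structures are hypothetical, with the thresholds taken directly from the rules above.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Thresholds from the moderation rules above; names are hypothetical.
WARNINGS_BEFORE_MUTE = 3
WARNINGS_BEFORE_BAN_VOTE = 5
MUTE_LENGTH = timedelta(days=10)
MIN_MODERATOR_VOTES = 5
BAN_APPROVAL_SHARE = 0.75

@dataclass
class ModerationRecord:
    warnings: int = 0
    muted_until: Optional[datetime] = None

def warn(record: ModerationRecord, now: datetime) -> str:
    # Issue a warning and apply the escalation policy.
    record.warnings += 1
    if record.warnings >= WARNINGS_BEFORE_BAN_VOTE:
        return "hold_ban_vote"          # a collective moderator vote decides
    if record.warnings == WARNINGS_BEFORE_MUTE:
        record.muted_until = now + MUTE_LENGTH
        return "muted_for_10_days"      # a mute blocks thread participation
    return "warned"

def tally_ban_vote(votes_for: int, votes_against: int) -> str:
    # A ban requires at least 5 moderators voting, with 75% or more in favor.
    total = votes_for + votes_against
    if total < MIN_MODERATOR_VOTES:
        return "quorum_not_met"
    if votes_for / total >= BAN_APPROVAL_SHARE:
        return "banned"
    # If moderators decide against a ban, the user's next offense is
    # immediately reconsidered and a new vote is taken.
    return "not_banned_revote_on_next_offense"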
Applying for Moderation

To become a moderator, a user must be a member of the co-op. The user will fill out a short application, which must be approved by another moderator, who can review the post history of the applicant. Before a user is given moderator privileges, they must complete a short training module.

Governance

Democratic governance lies at the heart of every co-op. However, there are numerous ways to structure this democracy. The two most popular methods in existing co-op organizations are direct and representational. This section will outline the advantages and disadvantages of each and ultimately lay out TaskSource's direct democracy.

The initial thought was to have a governing board, elected by the general membership on a one-vote-per-member basis. However, there are many ways in which a governing board can form competing interests with the general membership, diluting the idea of a worker-owned and operated organization. [13] To mitigate this future conflict, board seats could instead be filled on a rolling, rotating basis. This could take two forms: 1) there are elections, where each election must have a fresh slate of candidates, or 2) there are no elections, and instead each member takes a seat on the board in turn. The first option falls victim to the same issues seen in democratic nation-states across the globe: cliques (parties) can form, workers can form interest groups, and competition can ensue. The second option eliminates representational democracy, but as the co-op scales, members will have to wait longer and longer for their turn to contribute. In addition, any board, which sits above the general membership, creates a hierarchical management structure. Hierarchies have many flaws, and in a truly democratic and equitable establishment, a flat organizational structure is a key characteristic.

With the concept of a board eliminated, the co-op needs a way to organize tasks and functions through a flat management structure. This can be done through a functional division of labor. Using "functional divisions of labor" means dividing the necessary maintenance, growth, and management tasks into specialized subgroups. An example of this would be having subgroups for areas such as finance, IT, or marketing. A functional structure typically exists within a hierarchical management structure where each subgroup has an executive who reports to the CEO. To implement functional subgroups in a flat management structure, subgroups all sit on the same level and communicate with each other. This tends to promote interdependence and fosters a more collaborative approach to decision-making compared to a divisional design. [14]

[13] Hunt, Gerald Callan. Division of Labour, Life Cycle and Democracy in Worker Co-operatives. 1992.
[14] Hunt, Gerald Callan. Division of Labour, Life Cycle and Democracy in Worker Co-operatives. 1992.

Figure 2: The functional subgroups of TaskSource along with their weighted importance, necessary for the labor input formula.

With this subgroup design, it is important to note that a division of labor that has highly fragmented, narrow, skill-reducing tasks associated with status and monetary differentials, or that allocates managerial functions to a sub-entity, produces a managerial elite.
This elite group in turn becomes isolated from the rest of the members and grows to have conflicting views, recreating the lack of worker control that exists in traditional organizations. [15]

[15] Cornforth, C. "Some Factors Affecting the Success or Failure of Worker Co-operatives: A Review of Empirical Research in the UK." Economic and Industrial Democracy, 1983.

As mentioned previously, job rotation is a solid mechanism for maintaining democratic control: no single individual or small group holds all the information or power. In addition, job rotation optimizes for generalization. A generalized workforce is more democratic than a specialized one because more people will have the understanding necessary to weigh in on decisions. In TaskSource, job rotation would occur on a monthly basis. This allows enough time for people to become comfortable and knowledgeable in the role without creating a base of power. To facilitate the efficient transfer of roles, standardized and comprehensive knowledge transfers must be the cornerstone of this policy. Documentation, for each specific subgroup, will act as the living history of the co-op. Documenting all decisions and discussions can become tedious if someone has to write it up after the fact. Therefore, to keep things moving quickly and to lower the burden of work, documentation should be an active exercise in which the functional subgroups engage. A standardized structure for meeting notes, discussion notes, and consensus-making processes will allow documentation to be filled out quickly and to remain readable by anyone in the co-op.

Voting and Decision Making

Employing democratic voting on TaskSource will help centralize a largely decentralized platform of many users. The platform will employ community decision making through the use of collaborative decision-making software tools. The implemented features will include polling, which allows registered users to click to vote on an issue. The polls will have an associated thread board, giving users space for open discussions surrounding the polls. The contents of the polls will be proposed by the managerial subgroups of the co-op and then opened for a democratic vote by the larger pool of members. To help facilitate the managerial subgroups' proposal process, policies will be enforced to aid in streamlining the process and controlling for voting stalemates. This involves enforced deadlines, under which a proposal is merged in whatever state it is in when the deadline passes, and default renewals for established policies, in which consensus is assumed unless an objection or request for evaluation is made. For policies that require a renewal, an expiration date will be highlighted so that it is clear the policy requires attention, and all renewed proposals will be discussed.
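As a sketch of how these stalemate controls could work inside the collaborative decision-making tooling, the following illustrates the enforced-deadline merge and the default renewal. All names, and the one-year renewal term, are assumptions for illustration only.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Proposal:
    title: str
    deadline: date
    merged: bool = False

def enforce_deadline(proposal: Proposal, today: date) -> None:
    # Enforced deadline: once the deadline passes, the proposal is merged
    # in whatever state it is in, preventing voting stalemates.
    if not proposal.merged and today >= proposal.deadline:
        proposal.merged = True

@dataclass
class Policy:
    name: str
    expires: date                 # highlighted so the policy clearly needs attention
    objection_raised: bool = False

def review_policy(policy: Policy, today: date,
                  term: timedelta = timedelta(days=365)) -> str:
    # Default renewal: consensus is assumed unless an objection or a
    # request for evaluation is made.
    if policy.objection_raised:
        return "open_for_evaluation"
    if today >= policy.expires:
        policy.expires = today + term   # assumed one-year term; renewals are discussed
        return "renewed_by_default"
    return "active"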
Payments

This section will cover the co-op's finances and how payments will be distributed amongst the subgroups and co-op members. It will also dive into how working hours are defined in the co-op, preventative measures to avoid abuse of a clock-in/clock-out system, and potential internationalization opportunities on TaskSource.

Financial Overview

All revenue within TaskSource is initially pooled into the main co-op fund. This fund is then allocated across the co-op in the following structure:

● Co-op maintenance - 30%
○ IT (website domain and hosting) - 10%
○ Worker-owner benefits - 10%
○ Financial software - 5%
○ Voting software - 5%
● Worker-owner payouts - 65%
● Community enhancement - 5%
○ Skills training
○ Community building events

Payment Overview

A co-op's profits are allocated among the members on the basis of how much labor they put into the co-op. In other words, their contribution to the co-op is reflected in their profit allocation. Moreover, the total payment is calculated based on the amount of work performed both on the platform (time spent on surveys) and for the platform (time spent in functional subgroups). Workers will be paid on a monthly basis, and in the interest of transparency, payouts and fund status will be published internally after each pay period.

The payment formula is based on a working hour wage. Defining "working hour" is an important consideration because it must take into account the many forms of labor that can occur when logged into the platform. In determining this formula, it became clear that time spent completing surveys alone is an inaccurate measurement of labor input. This measurement does not account for scanning and pre-reading surveys to find which are a good fit, answering pre-survey questions, taking care of administrative tasks like updating account information, or reviewing background task-specific information. These are all forms of labor that should be fairly compensated, yet no existing crowdsourced data platform pays workers for the time spent on them. In order to incorporate these forms of labor into the compensation formula, a "working hour" will be calculated based on automatic clocking in/out, facilitated by technological systems that measure inactivity. More on this can be found in the Working Hour Overview section below.

Working Hour Overview

With a shared money pool and distributed workers who are paid based on a combination of time investment, co-op contributions, and survey completions, how a "working hour" is defined is pertinent to ensuring that users get paid for being present and accounted for, especially on slower days when not as many surveys might be available.

Figure 3: The inactivity flow for a user that is clocked in to their account for their work day.

With a basic clock-in/clock-out system, there is a risk that users may take advantage of it and deplete funds for everyone else in the co-op, especially under the guise of anonymity on the internet. Since funds are pooled across all workers and their contributions, there is a general incentive not to do this, but users may still try to abuse the system, or may do so unintentionally without understanding the co-op model. With so many factors, the most practical solution is having the platform itself enforce the clock-in and clock-out policy.

When users clock in, the system will check for inactivity every 30 minutes, where "activity" is defined by interaction with the page - clicking, scrolling, etc. Before the user times out at the 30-minute mark, they are given a warning on screen at the 27-minute mark with a randomly generated logic question CAPTCHA. If the user fails to complete the CAPTCHA correctly or does not complete it at all, they are logged out of the platform and clocked out immediately. In this scenario, users would only be able to log back in to the platform after 30 minutes. If a user fails to remain active and is booted for inactivity 3 times within 24 hours for any reason, they will not be able to log in for another 24 hours after the most recent time they were booted.
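A minimal sketch of this inactivity flow, with hypothetical names; the survey-feed variant described next would use the same flow with a 20-minute window and a warning at the 17-minute mark.

from datetime import datetime, timedelta
from typing import List, Optional

# Constants from the flow above; all names here are hypothetical.
INACTIVITY_TIMEOUT = timedelta(minutes=30)   # clocked-in inactivity window
CAPTCHA_WARNING_AT = timedelta(minutes=27)   # warning CAPTCHA appears here
RELOGIN_DELAY = timedelta(minutes=30)        # lockout after a single boot
BOOT_WINDOW = timedelta(hours=24)            # window for counting boots
MAX_BOOTS = 3                                # boots in the window before suspension
SUSPENSION = timedelta(hours=24)             # suspension after repeated boots

class ClockedInSession:
    def __init__(self, now: datetime) -> None:
        self.last_activity = now
        self.captcha_pending = False
        self.boot_times: List[datetime] = []
        self.locked_until: Optional[datetime] = None

    def record_activity(self, now: datetime) -> None:
        # Any page interaction (clicking, scrolling, etc.) resets the timer.
        self.last_activity = now
        self.captcha_pending = False

    def tick(self, now: datetime) -> str:
        # Called periodically by the platform to decide what to do next.
        idle = now - self.last_activity
        if idle >= INACTIVITY_TIMEOUT:
            return self._boot(now)  # CAPTCHA never completed in time
        if idle >= CAPTCHA_WARNING_AT and not self.captcha_pending:
            self.captcha_pending = True
            return "show_logic_captcha"
        return "ok"

    def submit_captcha(self, now: datetime, passed: bool) -> str:
        if passed:
            self.record_activity(now)
            return "ok"
        return self._boot(now)  # failed CAPTCHA: clock out immediately

    def _boot(self, now: datetime) -> str:
        # Keep only boots within the last 24 hours, then record this one.
        self.boot_times = [t for t in self.boot_times if now - t < BOOT_WINDOW]
        self.boot_times.append(now)
        if len(self.boot_times) >= MAX_BOOTS:
            # Third boot in 24 hours: no log-in for 24 hours from this boot.
            self.locked_until = now + SUSPENSION
        else:
            self.locked_until = now + RELOGIN_DELAY
        return "clock_out"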
In the event that users have surveys populated in their feed, the platform will instead check for inactivity as defined by a 20-minute period in which a worker has not started any of the surveys on their feed. At the 17-minute mark, the user will receive a randomly generated logic question CAPTCHA. If the user fails to complete the CAPTCHA correctly or does not complete it at all, they are logged out of the platform and clocked out immediately. In this scenario, users would only be able to log back in to the platform after 20 minutes. As above, if a user fails to remain active in this scenario and is booted for inactivity 3 times within 24 hours for any reason, they will not be able to log in for another 24 hours after the most recent time they were booted.

Working Hour Wage Calculation

Workers will be paid a flat rate per minute while clocked in, based on co-op income for each pay period, which is every month. The general working hour wage for the co-op will also incorporate payments for completed tasks, which will be paid at double the wage per minute. Example calculations for a worker's working hour wage can be found below (with an example wage per minute of $0.30).

Wage per minute while clocked in = $0.30
Wage per minute while completing a survey = Wage per minute while clocked in * 2
Working Hour Wage = Wage per minute while clocked in + Wage per minute while completing a survey

Measures of Labor Input Calculation

Aside from clocking in and fulfilling tasks on the platform, workers will also be evaluated and compensated for contributing towards co-op functions by participating in subgroups. Below is an example calculation of how a worker's input in a subgroup, combined with their wages earned from completing tasks, results in a general labor input calculation for each worker.

Wage = Working Hour Wage
Contribution = (hours worked in subgroup_1 * weight of subgroup_1) + ... + (hours worked in subgroup_n * weight of subgroup_n)
Labor Input = Wage + Contribution

Allocation of Capital Calculation

Once the lab
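As a worked illustration of the working hour wage and labor input formulas above, the following sketch applies them directly. The subgroup names and weights, and the reading that survey minutes are paid at the survey rate on top of the base clocked-in rate, are assumptions for illustration only.

# Example wage per clocked-in minute, taken from the text.
BASE_RATE = 0.30
# Completed-task (survey) minutes pay double the clocked-in rate.
SURVEY_RATE = BASE_RATE * 2

def working_hour_wage(clocked_minutes: float, survey_minutes: float) -> float:
    # Base pay for all clocked-in minutes, plus the survey rate for
    # minutes spent completing surveys (an assumed reading of the formula).
    return clocked_minutes * BASE_RATE + survey_minutes * SURVEY_RATE

def contribution(hours_by_subgroup: dict, weights: dict) -> float:
    # Contribution = sum of (hours worked in subgroup_i * weight of subgroup_i).
    return sum(hours * weights[name] for name, hours in hours_by_subgroup.items())

def labor_input(wage: float, contrib: float) -> float:
    # Labor Input = Wage + Contribution.
    return wage + contrib

# Example: 100 clocked-in minutes, 40 of them spent completing surveys,
# plus 2 hours in a hypothetical "IT" subgroup weighted at 1.5:
# working_hour_wage(100, 40) -> 100 * 0.30 + 40 * 0.60 = 54.0
# contribution({"IT": 2}, {"IT": 1.5}) -> 3.0
# labor_input(54.0, 3.0) -> 57.0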