Efficient and Effective ACT & SAT Programs. 

We collect more data on tests and on students, in a more dynamic fashion, than our competitors. We leverage our collective twelve years of teaching in higher education to mold a highly sophisticated, yet simple to teach, program. Our data insights and outside-the-box strategies allow SAT and ACT score gains to be made more quickly, and across a broader range of students, than has traditionally been the case in the standardized test prep industry.  

Efficient = Better Data

(1) Tests:  Most companies do not take the time to track question trends on the tests, and even those that do have not continued tracking them since the 2016 overhaul of the SAT, failing to realize that the SAT continues to evolve and that the ACT has itself been heavily modified. 

We have analyzed all official sample tests and continue to analyze administered tests using advanced statistical analysis to identify up-to-date trends in the SAT & ACT. We feed this raw data into our strategies, modifying them to respond to test trends. In this way, we can identify statistical frequencies of question types for a (continually updated) “average” SAT or ACT -- a powerful insight when matched to the diagnostic results for any student. 
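For readers who like to see the mechanics, here is a minimal sketch in Python of how question-type frequencies from a set of official tests could be rolled up into an "average" test profile. The category names and counts are hypothetical placeholders, not our actual data or categories:

```python
from collections import Counter

# Hypothetical question-type tallies from three official SAT Math tests.
# The category names and counts are placeholders, not real data.
tests = [
    {"linear_equations": 6, "systems": 3, "exponents": 2, "word_problems": 7},
    {"linear_equations": 5, "systems": 4, "exponents": 3, "word_problems": 6},
    {"linear_equations": 7, "systems": 2, "exponents": 2, "word_problems": 8},
]

totals = Counter()
for test in tests:
    totals.update(test)

# The "average" test profile: expected questions of each type per administration.
average_profile = {qtype: count / len(tests) for qtype, count in totals.items()}

for qtype, avg in sorted(average_profile.items(), key=lambda kv: -kv[1]):
    print(f"{qtype}: ~{avg:.1f} questions per test")
```

Matching a profile like this against a student's diagnostic results is what turns raw test data into a targeted study plan.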

(2) Students:  There are several tranches of test prep companies. Some – generalist tutoring firms – do not test students to see which kinds of questions trip up a particular student. Instead, they teach all students the same content as if every student performs the same on test day. Where companies do administer a diagnostic test to allow targeted teaching, they often only do so at the beginning, or sporadically, as if a student's performance does not evolve. Finally, even for companies that tout adaptive online learning (either software-based or paired with tutoring), the tracking of student performance is done i) on the basis of a small data set, and ii) on the basis of questions written by the companies themselves, which will, at best, always be an approximation of the material developed by College Board and ACT test writers. 


University Select’s program is heavily diagnostic-based. We only use official sample tests from the ACT and SAT as diagnostic material, and students take weekly diagnostic tests (interspersed with memorization exercises and practice sets) to identify whether the question types they have been working on are fading from their incorrect responses, as well as which new, statistically relevant question types are cropping up. In short, we rely on our analysis of the best practice material available from the test writers themselves, and we commit to continual diagnostic analysis of this material in an effort to make targeted, efficient gains. 

Students view their diagnostic progress through two customized reports on our online platform. One gives a visualization of incorrect response trends, while the other digs into the details of how we categorize and bundle each of the incorrect responses.  

We describe our advanced statistical understanding of the tests along with our ongoing targeted diagnostic analysis of student performance on official sample tests as our "two-tier" diagnostic system.

How we optimize student score potential

(3) Evolving:  It is worth repeating because it is one of our significant differentiators: we do not just understand every publicly available SAT, PSAT & ACT test, and we do not just understand how each test continues to evolve (and they do!); we also understand that a student's performance continues to evolve. We evolve with the student, continually altering their customized study plan as they progress so that we are always focused on the greatest score growth opportunities.

 

Effective = Better Methods

A thorough and current understanding of the tests and of our students is our starting point. Even with the most efficient path to maximum score gains mapped out, however, scores will not budge without simple yet effective methods to get students to the correct answers. As former and current undergraduate, graduate and law professors, we know how to break down complicated content into manageable bites. Many of those writing test methods for companies that look like us are the Michael Jordans of test taking. We are, instead, the Phil Jacksons: we are successful test-takers, but we are even better at coaching others to be successful on college entrance exams. This success is not restricted to students seeking perfect scores; it extends across the entire spectrum of baseline scores.

Our methods have to meet the following litmus test: can they be applied easily, consistently, and under duress? While our program is sophisticated – data-driven and research-based – its aim is to provide simple solutions to get our students to the right answers and to higher scores.

It is difficult to describe, in a few words, the effect of combining our advanced data analysis of the tests with our backgrounds as teachers and researchers. So, here is a list of examples that - we believe - underscores how different and how much more effective our strategies can be: 

  1. ACT Science - The Science Test does not test a student's substantive knowledge of science. It is instead a test of processing speed - an assessment of how quickly a student can interpret data and text. With forty questions in thirty-five minutes, it amounts to a blitz. To beat the blitz, we crunched ACT test data going back over a decade to identify an analytics-based time management strategy that allows students to easily identify and differentially respond to passage types in a way that saves 4-5 minutes - on average - per Science Test. Reading the blitz can make all the difference in the world.

  2. SAT Reading - Instead of preaching “read more books” or “annotate the text” as our reading test method, we drew on our backgrounds as law professors to develop an automated textual analysis method that allows students to filter for the four to five words in a particular paragraph that are most important to getting a standardized test question correct. We call the method Evidence-Based Exclusion, or EBEX. For high scoring students, EBEX disciplines the tendency to overanalyze a passage, allowing them to fit into the frame required by the test writers and to weed out the last, precious handful of incorrect responses. For lower scoring students, EBEX provides the bricks and mortar for an automated reading method that allows them to push well beyond what they thought possible.

  3. SAT Math - In the SAT Math sections, we create effective score gains by bundling question types on the basis of our data insights. If a student gets twenty questions wrong on a diagnostic, we aren't going to talk about twenty different things. We are instead going to teach the four or five shared math principles, the recurring recognition issue, or the standardized test strategy that narrows the playing field for the student. Once viewed through our proprietary bundled system of question types, SAT math becomes very repetitive -- it becomes an exercise in pattern recognition, or in our words a "cue and response". So fine. We bundle things. And we've thought a lot about how to put things together. But we don't then go after every bundled question type in any order. Through an algorithm on our platform, we match the frequency with which a student gets certain question groups incorrect against their frequency of occurrence on an average SAT (sketched below). In this way, we identify efficient score gains, allowing us to narrow still further the math we have to cover to boost a student's score. We don't even stop there. For SAT Math, not only do we tell students how frequently their incorrect-response question groups will occur; we also tell them, to a high degree of accuracy, where on the No Calculator or Calculator sections those questions are likely to appear.
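To give a flavor of that matching step, here is a minimal sketch in Python, assuming invented question groups, frequencies, and miss rates; it illustrates the idea of frequency-weighted prioritization rather than our actual algorithm:

```python
# Hypothetical inputs: how often each bundled question group appears on an
# "average" SAT (questions per test) and the student's miss rate on that group
# from recent diagnostics. All numbers are illustrative.
average_frequency = {"linear_systems": 4.0, "ratios": 3.0, "quadratics": 2.5, "circle_geometry": 1.0}
student_miss_rate = {"linear_systems": 0.50, "ratios": 0.20, "quadratics": 0.60, "circle_geometry": 0.80}

# Expected points left on the table per test: frequency x miss rate.
priority = {
    group: average_frequency[group] * student_miss_rate[group]
    for group in average_frequency
}

# Teach the groups with the largest expected gain first.
for group, expected_misses in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{group}: ~{expected_misses:.1f} expected missed questions per test")
```

Ranking by expected missed questions, rather than by miss rate alone, is what keeps a rarely tested topic from crowding out a frequently tested one.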

While our program is sophisticated, we provide simple solutions to get our students to the right answers and to higher scores.