A recent blog post from Electric Era, a charging company that installs battery-backed charging stations in the western United States, shared some interesting news out of California. The state Energy Commission announced that instead of using uptime, station run time, or simply whether someone can get a charge at all, it will measure reliability with the “Successful Charge Attempt Rate,” or SCAR. The regulation defines SCAR and sets the goal:
Ninety percent of the time a customer attempts to initiate a charging session at a regulated charger, the charging session must last at least five minutes, which will be considered a successful charge under this regulation. The minimum SCAR is determined on a per-port, not per-site, basis; each charger at a charging site must achieve a SCAR of at least 90 percent to comply.
More specifically, every kiosk at every charging station must successfully start charging and hold it for at least five minutes 90% of the time. Being able to charge on the second or third try doesn’t cut it. Being able to charge at another kiosk doesn’t cut it either.
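To make the rule concrete, here is a minimal sketch in Python of how the per-kiosk test works. The session data and the helper names (`port_scar`, `site_complies`) are made up for illustration; the point is that every port has to clear the 90% bar on its own, so one reliable kiosk can’t carry a broken neighbor.

```python
# Sketch of the per-port SCAR rule: an attempt counts as successful
# only if the session starts and lasts at least five minutes.
# Session data and helper names below are hypothetical.

MIN_SESSION_MINUTES = 5
MIN_SCAR = 0.90

def port_scar(session_minutes):
    """Fraction of charge attempts on one port lasting >= 5 minutes.
    A failed start is recorded as 0 minutes."""
    if not session_minutes:
        return 0.0
    successes = sum(1 for m in session_minutes if m >= MIN_SESSION_MINUTES)
    return successes / len(session_minutes)

def site_complies(ports):
    """Every port must individually hit 90% -- averaging across
    the site's ports does not count under the regulation."""
    return all(port_scar(p) >= MIN_SCAR for p in ports)

# One good port can't cover for a bad one:
good_port = [30, 45, 0, 25, 60, 40, 35, 50, 28, 33]  # 9 of 10 attempts succeed
bad_port = [30, 0, 0, 25, 0, 40, 35, 0, 28, 33]      # 6 of 10 attempts succeed
print(site_complies([good_port, good_port]))  # True
print(site_complies([good_port, bad_port]))   # False
```

Averaging the two example ports would give 75%, which looks close to passing; the per-port rule instead flags the site immediately because one kiosk sits at 60%.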
But is this really the best way to measure reliability? To answer that question, we first need to look at the alternatives and weigh their pros and cons.
Other ways to measure reliability
One of the main ways we’ve seen reliability measured is with PlugShare. You either get a charge or you don’t. Because of this, most of a charging station can be down and it can still get a perfect 10, as long as someone gets a chance to charge. This lets charging providers rely on sheer traffic to keep their scores high, but it leaves people looking for a charge unaware of things like broken connectors, slow charging rates, frustrating plug-unplug-replug hassles, and so on.
Uptime can also leave out important context. For example, a 97% uptime requirement (which is what NEVI requires) sounds great, until you consider what 3% of a month is: almost 22 hours! A station can be down for nearly a full day every month and still look shiny. On a yearly basis, you’re talking about up to 11 days of acceptable downtime, which is clearly unacceptable. The other question is: how is uptime measured? Across the entire network? The entire site? Each individual kiosk? The pros and cons of each approach would take a long time to fully explore, but something like the average uptime of all stations across an entire charging network could leave a lot of broken machines stranding people while still producing a great number.
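The downtime arithmetic above is easy to check. A quick back-of-the-envelope calculation (assuming a 30-day month) shows what the 3% left over by a 97% uptime requirement actually allows:

```python
# How much downtime does a 97% uptime requirement actually permit?
# Assumes a 30-day month for the monthly figure.

uptime_requirement = 0.97
hours_per_month = 30 * 24  # 720 hours in a 30-day month
days_per_year = 365

allowed_down_hours_month = (1 - uptime_requirement) * hours_per_month  # ~21.6 hours
allowed_down_days_year = (1 - uptime_requirement) * days_per_year      # ~11 days

print(f"Monthly downtime allowed: {allowed_down_hours_month:.1f} hours")
print(f"Yearly downtime allowed: {allowed_down_days_year:.1f} days")
```

That’s where the “almost 22 hours” and “up to 11 days” figures come from: 3% of a 720-hour month is 21.6 hours, and 3% of 365 days is about 11 days.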
I came up with a more sophisticated system for measuring station reliability, which I shared in another article. Rather than trying to boil EV charging down to a single number, I proposed a set of multi-point statistics: a simple up/down status, the number of kiosks currently available, the uptime over the past month, and a user rating where people give a subjective score that gets averaged. On further reflection, though, this becomes problematic: it’s more numbers than someone looking for a charger wants to sift through. It’s also possible that the subjective score could be manipulated by competing companies.
So, even my idea from last year kind of sucks.
Where SCAR really shines
Compared to other systems for measuring reliability and sharing that information, SCAR is great because it’s simple: it measures every charge attempt at every kiosk, rather than hiding problems behind an average. And because it is measured by an agency that oversees the network, grants, and other aspects of charging, it gives regulators an opportunity to see where things are going wrong and take a deeper look. That, in turn, can lead to the further study needed to make good decisions, especially those related to the distribution of government funds.
In other words, SCAR is a great system for managing charging at the site, network, and government level, and for informing the decisions made at each of those levels.
Where does SCAR fail?
I think where SCAR doesn’t work well is user-facing reviews. If you’re looking for a charging station, knowing how often it charges on the first try is helpful, but it tells you nothing about other issues the site might have.
For example, a site might always start charging on the first try but only deliver a half-speed session. You need the range, so you won’t want to leave, and you may not be able to switch kiosks because the others are occupied. Your charging session counted toward a great score, but it wasn’t a great charging session.
Another way SCAR can fail is at places where people have stopped trying to use a malfunctioning charger. It is common for EV drivers to loop a cable over the top of a stall to signal to other drivers that it is down. If nobody tries to charge, there are no failed sessions, which means the score won’t drop to indicate a problem.
I’m sure there are other issues that SCAR doesn’t expose to users looking to charge. Feel free to explore them in the comments or on social media.
Different tools for different jobs
For government regulators and grant providers, SCAR is a great tool to have in the toolbox. It sets a clear minimum target: a 90% SCAR means a plug delivers a charge at least 90% of the time. Networks that can’t meet this at the kiosk, station, or network level probably shouldn’t be rewarded with more grant funding.
But for users, a multi-metric approach is still needed. Objective measures, like how often people have been able to charge, are important. But pairing a station’s star rating with the raw numbers gives users a chance to see when a station isn’t delivering customer satisfaction, and to browse through reviews to see exactly what went wrong.
So, I have to conclude that SCAR is a good quantitative measure that we can use to set rules, but we need something to capture whether drivers are happy with the stop. Ultimately, that’s what we really want: happy electric car drivers.
Featured image by Jennifer Sensiba.