One of the biggest and most difficult issues is that a FLOP is not a FLOP, no matter how much you would like to think they are the same. The same goes for I/Os, RAM requirements, integer operations per second, network bandwidth, etc. For example, an Android device ranks very high in integer operations per second and is often faster than some older PCs. Yet the Android device takes 10 times longer to finish the work unit than the old PC. It's almost impossible to have a scale measured by the client that results in fair credit, because you end up comparing apples to oranges. So, how do we fix it? For projects with fixed-length work units, it is easy. The project admin uses some type of standard machine, cranks out a dozen or more work units, and then calculates the amount of credit for the work unit. Then all other hardware, regardless of how well or how poorly it can run that project compared to others, gets the same credit per work unit. In other words, you might get more credit on project X than on project Y simply because your hardware is better optimized to run on project X. We've seen that for years with Intel vs. AMD processors and even to some extent Windows vs. Linux.
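The fixed-credit calibration described above could be sketched roughly like this. This is an illustration only, not any project's actual implementation; the timing values, the reference machine's GFLOPS rating, and the names (REFERENCE_RUNS, COBBLESTONE_SCALE) are all assumptions. The only real constant is BOINC's "cobblestone" definition of one credit as 1/200 of a day on a 1 GFLOPS machine.

```python
# Hypothetical sketch of fixed-credit calibration for a project with
# fixed-length work units. All numbers below are made up for illustration.

# Wall-clock seconds for a dozen runs of the same work unit on one
# "standard" reference machine (assumed values).
REFERENCE_RUNS = [3610.0, 3595.0, 3580.0, 3620.0, 3605.0, 3590.0,
                  3615.0, 3600.0, 3585.0, 3625.0, 3598.0, 3602.0]

# Benchmark rating of the reference machine (assumed), and a scale factor
# from BOINC's cobblestone definition: 200 credits per GFLOPS-day.
REFERENCE_GFLOPS = 50.0
COBBLESTONE_SCALE = 200.0 / 86400.0  # credits per GFLOPS-second

def fixed_credit_per_workunit(run_seconds, ref_gflops, scale):
    """Average the reference timings and convert to a flat credit value
    that every host then receives per completed work unit."""
    avg_seconds = sum(run_seconds) / len(run_seconds)
    return avg_seconds * ref_gflops * scale

CREDIT_PER_WU = fixed_credit_per_workunit(
    REFERENCE_RUNS, REFERENCE_GFLOPS, COBBLESTONE_SCALE)
print(round(CREDIT_PER_WU, 2))
```

Once CREDIT_PER_WU is set, a fast GPU and a slow phone get the same number for the same work unit, which is exactly why the same hardware can earn very different credit rates on different projects.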
So... if we change to a brand-new credit scheme, do we do away with all previous credit and start over? If not, is it fair to have a new system that grants more or less credit than the previous one? For example, if credit system IV ends up doubling the credit on project X, should all the previous users' credit be doubled so as to be fair? Or, if the new credit on project Y is only half as much, should all the previously accrued credit be cut in half in order to be fair to the current volunteers? It's a can of worms.
Is CreditNew broken? IMO, yes.
Can there be cross project parity? No. The algorithms and hardware requirements vary too much from project to project.
Can the exact same credit system work for all projects? No. Projects with variable-length work units will have to have some way of calculating the credit based upon the hardware and the length of time it took to complete the work. Two hours on an old cell phone is not the same as two hours on a high-end GPU or a 24-core server. And two hours on one project with some GPU is not the same as two hours on another project with the same GPU, where the GPU load may be twice as high or the RAM requirements 10 times as high.
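One naive way to handle variable-length work units is to scale credit by the host's measured throughput times elapsed time. This is a minimal sketch of that idea, not BOINC's actual CreditNew formula; the benchmark figures for the phone and the GPU are assumptions, and only the 200-credits-per-GFLOPS-day cobblestone constant is real.

```python
# Illustrative runtime-based credit: device throughput times elapsed time,
# so two hours on an old phone is worth far less than two hours on a GPU.
# NOT the real CreditNew algorithm; benchmark numbers are assumed.

COBBLESTONES_PER_GFLOPS_DAY = 200.0  # BOINC's cobblestone definition

def runtime_credit(elapsed_seconds, measured_gflops):
    """Credit proportional to (benchmark throughput) x (elapsed time)."""
    gflops_days = measured_gflops * elapsed_seconds / 86400.0
    return gflops_days * COBBLESTONES_PER_GFLOPS_DAY

two_hours = 2 * 3600
phone = runtime_credit(two_hours, 5.0)      # old phone, ~5 GFLOPS (assumed)
gpu = runtime_credit(two_hours, 10000.0)    # high-end GPU, ~10 TFLOPS (assumed)
print(round(phone, 1), round(gpu, 1))
```

Even this sketch ignores the second problem raised above: the same GPU for the same two hours earns the same credit here regardless of whether one project loads it twice as hard or needs ten times the RAM, so benchmark-times-runtime alone still can't make credit comparable across projects.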
For now, Collatz work units are close enough to the same size that they can use fixed credit per step, but that is changing as the sieve algorithms get more efficient. Some have found that doing less science by using a smaller sieve size actually gets them more credit. As the sieve size increases, it will be harder and harder for older hardware to keep up. If I wanted to run on project X, which really heats up my CPU, versus some other project where it runs much cooler, shouldn't I get rewarded for that with a little extra credit, since my hardware will likely fail sooner and the cost of electricity to run (and cool) it is much higher? I guess what I'm saying is that everything is based on FLOPS now, and that's just the tip of the iceberg. There are numerous factors that have to be considered, and not all will pertain to all projects, making it almost impossible to compare them to each other. It's hard enough trying to make the credit fair across hundreds of CPUs, GPUs, and operating systems, much less compounding that by 50 for each project.
Summary: I can tell you what's wrong with the current system, but I don't have a solution that will be better for all projects, at least not a K.I.S.S. solution.