For synchrophasor measurements (the target use case), you almost always want the full set of measurements from a given PMU so you can calculate values on the fly. So changing the far archive to be ordered by point would still not be viable. You really need 2 separate archives and query one or the other depending on how many points you are trying to scan.
Here are 2 use cases, one where each ordering suffers:
Assume a 1TB/year data repository broken into 365 daily files of roughly 3GB each.
Scanning 1 point for an entire year:
By Time: That would require a sequential scan of all of the data for that year. Response Time: Hours.
By PointID: That would require scanning 365 files (assuming they are still partitioned into 1 file per day) for 1 point. Response Time: Seconds+.
Scanning all points for a given time:
By Time: Seeking in 1 file for a few blocks of sequential data. Response Time: Milliseconds.
By PointID: In 1 file, seeking 50,000 times. Actually, it will probably be faster to read the entire file sequentially, since seeks are really slow. Response Time: Minutes.
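The two cases above can be put into a back-of-envelope cost model. The hardware numbers below (seek time, sequential throughput) and the 50,000-point count are illustrative assumptions, not measurements of any particular system:

```python
# Rough cost model for the two orderings. All constants are assumptions:
# ~10 ms per random seek, ~200 MB/s sequential read, 365 daily 3 GB files.
SEEK_S = 0.010           # seconds per random disk seek (assumption)
THROUGHPUT_BPS = 200e6   # sequential read rate, bytes/sec (assumption)
FILE_BYTES = 3e9
FILES_PER_YEAR = 365
POINTS = 50_000

# Case 1: scan 1 point for an entire year.
# Time-ordered: the point is scattered, so every file must be read in full.
case1_by_time = FILES_PER_YEAR * FILE_BYTES / THROUGHPUT_BPS
# Point-ordered: a few seeks per daily file land on that point's block.
case1_by_point = FILES_PER_YEAR * 3 * SEEK_S

# Case 2: scan all points at a single timestamp.
# Time-ordered: seek once into 1 file, read a few contiguous blocks.
case2_by_time = 2 * SEEK_S
# Point-ordered: one seek per point -- or just read the whole file,
# whichever is cheaper.
case2_by_point = min(POINTS * SEEK_S, FILE_BYTES / THROUGHPUT_BPS)

print(f"1 point, 1 year   -- by time:  {case1_by_time / 3600:.1f} h")
print(f"1 point, 1 year   -- by point: {case1_by_point:.0f} s")
print(f"all points, 1 time -- by time:  {case2_by_time * 1000:.0f} ms")
print(f"all points, 1 time -- by point: {case2_by_point:.0f} s")
```

Even with generous throughput assumptions, the gap between the orderings is several orders of magnitude in each direction, which is why no single ordering wins both cases.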
So we decided that for synchrophasor data, the best choice is to order by Time. For SCADA data, it's really a toss-up, but the scan speed is so fast it should still be reasonable to sort by Time. For AMI meter data, it depends on the use case: ordering by Time would be terrible for a web interface serving a single customer, while ordering by PointID would be detrimental to analytic engines that look at the entire sample set. Imagine randomly seeking millions of times just to get today's meter reads.
Also, you might be missing one key feature: have you tried down-sampled seeks? If you are storing 2-second SCADA data and want to go back 1 year, do you really plan on displaying 15 million measurements to the client? Rather, you should probably request 1 sample per hour and display roughly 8,760 measurements. Then when the user selects a month, display 4 measurements per hour; when they select a week, display 1 measurement per minute; and when they select a day, show the full-resolution data. That will execute much quicker and was the design case for the openHistorian. You can also build down-sampled databases, in which case all of these types of queries would return in milliseconds. That's what we have done at OGE for our software.
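The zoom ladder described above can be sketched as a simple resolution picker. The thresholds and intervals here are assumptions taken from the paragraph, not an openHistorian API; the idea is just that the chart gets a bounded number of points regardless of the span requested:

```python
from datetime import timedelta

# Illustrative zoom ladder (assumed thresholds, not openHistorian's API):
# spans wider than the threshold get the paired down-sampling interval.
ZOOM_LADDER = [
    (timedelta(days=180), timedelta(hours=1)),     # year view: 1 sample/hour
    (timedelta(days=21),  timedelta(minutes=15)),  # month view: 4 samples/hour
    (timedelta(days=3),   timedelta(minutes=1)),   # week view: 1 sample/minute
]
FULL_RESOLUTION = timedelta(seconds=2)             # raw 2-second SCADA data


def sampling_interval(span: timedelta) -> timedelta:
    """Pick the coarsest interval whose threshold the requested span exceeds."""
    for threshold, interval in ZOOM_LADDER:
        if span > threshold:
            return interval
    return FULL_RESOLUTION  # day view or narrower: show everything


year = timedelta(days=365)
interval = sampling_interval(year)
print(interval, year // interval)  # 1:00:00 8760 -- ~8,760 points for a year
```

A down-sampled database is then just this same ladder materialized ahead of time: one archive per rung, so the query engine reads the pre-aggregated rung instead of skipping through the raw data.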