Description
`solid_cache` uses a couple of queries that look like this:

```ruby
Entry.largest_byte_sizes(samples).pick(Arel.sql("sum(byte_size), count(*), min(byte_size)"))
```
Performance-wise, this is the best approach, since multiple values are fetched within a single query.
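For illustration, on most adapters this compiles to a single query along these lines (a rough sketch; the actual SQL produced by `largest_byte_sizes` is more involved):

```sql
-- rough sketch, not the exact query: pick limits the relation to one row
SELECT sum(byte_size), count(*), min(byte_size)
FROM solid_cache_entries
LIMIT 1
```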
But since `pick` uses `limit` under the hood, the `sqlserver` adapter has to add an `ORDER BY` clause in order to use `OFFSET`. By default this is `ORDER BY [id]`, which in the case above throws an error:
```
ActiveRecord::StatementInvalid: TinyTds::Error: Column "solid_cache_entries.id" is invalid in the ORDER BY clause because it is not contained in either an aggregate function or the GROUP BY clause.
```
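If I read the adapter correctly, the generated T-SQL ends up looking roughly like this (a sketch, not the exact output):

```sql
-- rough sketch: the default ORDER BY added so that OFFSET/FETCH is usable
-- is exactly what SQL Server rejects next to the aggregate functions
SELECT sum(byte_size), count(*), min(byte_size)
FROM [solid_cache_entries]
ORDER BY [solid_cache_entries].[id]
OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY
```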
I looked a bit into `make_Fetch_Possible_And_Deterministic`, where this `ORDER BY` clause is added. In theory, it should be possible to find out which columns are referenced in the `SELECT` statement by looking into `o.cores.first.projections`. I think there are two cases: the projection is either already an `Arel::Attributes::Attribute` or a `String`.
If it's a string, we would have to extract the column name in order to get an `Arel::Attributes::Attribute` from the `Arel::Table`. We could split it on each `,`, then check whether any known column (using the `schema_cache`) is mentioned, likely using a regex similar to the one in `query_requires_identity_insert?`, where the identity columns are looked up.
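A rough, untested sketch of the idea (the method name, its arguments, and the way the `schema_cache` is reached are my assumptions, not existing adapter code):

```ruby
# Hypothetical helper: derive ORDER BY candidates from the SELECT projections
# instead of unconditionally falling back to ORDER BY [id].
def order_columns_from_projections(o, table, schema_cache)
  known_columns = schema_cache.columns(table.name).map(&:name)

  o.cores.first.projections.flat_map do |projection|
    case projection
    when Arel::Attributes::Attribute
      # Case 1: already an Arel attribute, usable for ORDER BY as-is.
      [projection]
    when String
      # Case 2: a raw SQL string (Arel.sql returns an Arel::Nodes::SqlLiteral,
      # which is a String subclass), e.g. "sum(byte_size), count(*), min(byte_size)".
      # Split on commas and keep fragments that mention a known column,
      # similar in spirit to the regex in query_requires_identity_insert?.
      projection.split(",").filter_map do |fragment|
        name = known_columns.find { |column| fragment.match?(/\b#{Regexp.escape(column)}\b/i) }
        table[name] if name
      end
    else
      []
    end
  end
end
```

The current `ORDER BY [id]` behavior would stay in place as a fallback whenever no column can be extracted.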
I am not sure whether this is a smart approach, so I wanted to get your feedback before actually trying to implement it.