Details
- Type: Sub-task
- Status: Done/Fixed
- Priority: Minor
- Resolution: Fixed/Completed
- Affects Version/s: 4.7
- Fix Version/s: 4.7.7
- Component/s: None
- Labels:
- Documentation Required?: User and Admin Doc
- Funding Source: Contributed Code
Description
Recording a logging entry involves writing a row to the relevant log table containing the user_id, log_conn_id, log_date and the log_action being undertaken.
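For illustration only, the extra log_* columns on a log table look roughly like this (column names follow the description above; the exact names, types and enum values in the shipped schema may differ):
// Sketch only: the log_* columns added alongside a copy of the tracked
// table's own columns. Table name, column names and types are illustrative.
CRM_Core_DAO::executeQuery('
  ALTER TABLE log_civicrm_contact
    ADD COLUMN log_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    ADD COLUMN log_conn_id INT(11),
    ADD COLUMN log_user_id INT(11),
    ADD COLUMN log_action ENUM("Initialization", "Insert", "Update", "Delete")
');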
When viewing the reports I have found they are flaky because there is some weird argy-bargy around ensuring that you are only getting records relevant to a particular log_conn_id within a 10-second window. Of course, if the process is slow or long, 10 seconds may not be enough. The query can also have speed implications, in particular in the default scenario where the field is not indexed.
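To make the problem concrete, here is a sketch of the kind of window filter involved. This is not the actual report query; the table name, columns and the $connId / $start variables are illustrative:
// Sketch only: fetch the changes belonging to one "change set" by pairing the
// connection id with a 10-second window around the triggering log_date.
// Rows written more than 10 seconds later fall outside the window, and with
// no index on log_conn_id (the default) the scan is slow.
$dao = CRM_Core_DAO::executeQuery('
  SELECT * FROM log_civicrm_contact
  WHERE log_conn_id = %1
    AND log_date BETWEEN %2 AND DATE_ADD(%2, INTERVAL 10 SECOND)
', array(
  1 => array($connId, 'Integer'),
  2 => array($start, 'String'),
));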
After some playing around and looking at old tickets I came across the reason for this 10-second rule: log_conn_id stores MySQL's CONNECTION_ID(), which is only unique among the connections open at that time. A later connection might be issued the same ID, so the value on its own cannot identify a change set.
My proposal is to replace the complexity of getting data out with better data going in. Instead of setting log_conn_id to connection_id(), set it to uniqid() concatenated with connection_id(). uniqid() is generated in PHP and is unique to the current microsecond. Since that is not guaranteed to be completely unique on its own, I propose appending the connection_id, which IS unique within any given microsecond.
CRM_Core_DAO::executeQuery('SET @uniqueID = CONCAT("' . uniqid() . '", CONNECTION_ID())');
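The logging triggers would then read that session variable instead of calling CONNECTION_ID() directly. A rough sketch of a generated trigger under that assumption; the trigger name, column list and the @civicrm_user_id variable are illustrative, as the real triggers are generated by the logging code:
// Sketch only: an update trigger writing @uniqueID into log_conn_id.
CRM_Core_DAO::executeQuery('
  CREATE TRIGGER civicrm_contact_after_update AFTER UPDATE ON civicrm_contact
  FOR EACH ROW
    INSERT INTO log_civicrm_contact
    SET id = NEW.id, display_name = NEW.display_name,
        log_conn_id = @uniqueID, log_user_id = @civicrm_user_id,
        log_date = CURRENT_TIMESTAMP, log_action = "Update"
');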
This involves changing the log_conn_id column in the log tables from int(11) to varchar(24), which might make it a slower upgrade since every log table has to be altered.
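A sketch of the per-table change implied here (the table name is illustrative; in practice this would be run for every log_* table, which is where the upgrade time goes):
// Sketch only: widen log_conn_id so it can hold uniqid() . CONNECTION_ID().
CRM_Core_DAO::executeQuery('ALTER TABLE log_civicrm_contact MODIFY log_conn_id VARCHAR(24)');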
Since we won't have truly unique ids for entries from before the conversion, I think we need to record the date of the conversion in the settings and consider this date when presenting logging data, i.e. we still need the 10-second argy-bargy for entries logged before this date.
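A sketch of how the report side could branch on such a setting; 'logging_uniqueid_date' is an illustrative name rather than an existing setting, and the date strings are assumed to be in 'Y-m-d H:i:s' form so they compare correctly:
// Sketch only: entries logged before the conversion date still need the
// 10-second window; newer entries can be matched on log_conn_id alone.
$conversionDate = Civi::settings()->get('logging_uniqueid_date');
if ($conversionDate && $logDate >= $conversionDate) {
  $where = 'log_conn_id = %1';
}
else {
  $where = 'log_conn_id = %1 AND log_date BETWEEN %2 AND DATE_ADD(%2, INTERVAL 10 SECOND)';
}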