If both ends had m.dummy events queued as their last messages and an
olm session got corrupted, the clients ended up in an infinite game of
ping-pong. It was so stable that the clients could have won the
ping-pong world championships!
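A minimal sketch of the kind of guard that breaks the rally, assuming
the client keeps a record of the to-device event types it queued per
device (all names hypothetical):

```dart
// Hypothetical sketch: before answering a corrupted olm session with yet
// another m.dummy, check whether our own last queued to-device message to
// that device was already an m.dummy.
bool shouldAnswerWithDummy(List<String> queuedEventTypes) {
  // If we already pinged last, answering again just keeps the rally going.
  return queuedEventTypes.isEmpty || queuedEventTypes.last != 'm.dummy';
}
```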
RoomUpdate came from a time when we had no data model for
SyncUpdates. Now we do, so this class is just code duplication.
This removes the class and uses the SyncRoomUpdate class from
the matrix_api_lite package instead.
It needed a lot of refactoring in some places,
where I have also removed some unnecessary null or type checks.
The current implementation of sortOrder can be made much simpler now
by just keeping the order of the list and the timelineFragments in
the hiveStore. This needed a huge change, but it mostly removes a lot
of code that can be written far more simply now. It also required some
rewriting of the setState logic and changes to the prevEvent
calculation. This solution should also be more stable.
More information:
https://www.reddit.com/r/fluffychat/comments/pfnlhq/the_sort_order_of_matrix_timelines/
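A minimal sketch of the idea, assuming event IDs are persisted as a
plain ordered list per timeline fragment (the real hiveStore schema may
differ):

```dart
import 'package:hive/hive.dart';

// Persist the ordered list of event IDs for one timeline fragment; the
// position in the list *is* the sort order, so no per-event sortOrder
// value has to be maintained anymore.
Future<void> storeFragment(Box box, String key, List<String> eventIds) =>
    box.put(key, eventIds);

List<String> loadFragment(Box box, String key) =>
    List<String>.from(box.get(key, defaultValue: const []) as List);
```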
This way some concurrency bugs might be fixed, such as two sync
requests being processed at the same time. That can happen, for
example, if you request history while a sync request is already being
processed.
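A sketch of the serialization, using the `synchronized` package as a
stand-in for whatever lock the SDK actually uses:

```dart
import 'package:synchronized/synchronized.dart';

final _syncLock = Lock();

// All sync handling goes through one lock, so a history request can never
// interleave with a /sync response that is still being applied.
Future<void> handleSyncUpdate(dynamic update) =>
    _syncLock.synchronized(() async {
      // apply the update to rooms, timelines and the store here
    });
```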
We should just leave the `auth` object null, not send it on the first
try, and wait for the server's response. This worked in the past
but is now broken because of changes in
matrix_api_lite. This could also be the cause of some
bootstrap issues.
I have also removed an unnecessary check whether a String is a String
and turned it into a null check, because that is what was intended at
this point.
Because this blocks uiaRequests, it is a hotfix and therefore directly
bumps the version.
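A hedged sketch of the intended flow; `UiaException` and its fields are
made-up stand-ins for however the 401 response is surfaced:

```dart
// All names here are hypothetical. The point is the shape of the flow:
// the first attempt carries no `auth` key at all, and only the retry
// uses the session ID that the server's 401 response handed back.
class UiaException implements Exception {
  final String session;
  final List<String> flows;
  UiaException(this.session, this.flows);
}

Future<Map<String, dynamic>> uiaRequest(
  Future<Map<String, dynamic>> Function(Map<String, dynamic>? auth) send,
) async {
  try {
    return await send(null); // first try: leave `auth` out entirely
  } on UiaException catch (e) {
    // the server answered with flows + session; retry with auth attached
    return send({'session': e.session, 'type': e.flows.first});
  }
}
```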
As it turns out, some of the code set the prev_batch for rooms too
early to an empty string. For synapse this means "request from the
start"; for conduit it is just an error. This commit fixes that by
never resolving null to an empty string and instead throwing an error.
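A sketch of the stricter conversion (helper name hypothetical):

```dart
// A missing prev_batch is a bug in the caller, not a request to paginate
// from the start of the room, so fail loudly instead of coercing to ''.
String prevBatchOrThrow(String? prevBatch) {
  if (prevBatch == null) {
    throw StateError('prev_batch is not set for this room yet');
  }
  return prevBatch;
}
```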
The old mentionMap was very inefficient to build and scaled badly with
room member count. This resulted in noticeable lag when sending any
message in a large room, no matter whether it contained a mention or
not. Now the algorithm is heavily optimized and mentions (and emotes)
are only loaded when actually used.
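A sketch of the lazy-loading pattern, with hypothetical names; the real
code additionally covers emotes:

```dart
// Build the expensive map only on first use, not on every message send.
class MentionResolver {
  final Map<String, String> Function() _buildMap;
  Map<String, String>? _map;

  MentionResolver(this._buildMap);

  // Sending a message without any mention never touches the member list.
  String? resolve(String mention) => (_map ??= _buildMap())[mention];
}
```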
It appears that [hideEdit] in Event.getLocalizedBody was written in a
way that assumes a valid event body. This is fixed as well, and tests
for the various parameters of Event.getLocalizedBody have been added.
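A minimal sketch of the kind of defensive access, assuming the usual
m.new_content layout for edits (helper name hypothetical):

```dart
// Sketch only: when hiding the edit fallback, prefer the body from
// m.new_content, and never assume any of these fields actually exist.
String localizedBody(Map<String, dynamic> content, {bool hideEdit = false}) {
  if (hideEdit && content['m.new_content'] is Map) {
    final newBody = (content['m.new_content'] as Map)['body'];
    if (newBody is String) return newBody;
  }
  final body = content['body'];
  return body is String ? body : '';
}
```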
A client might need to get the verification request object by its
transaction ID, e.g. to easily display an "accept verification
request" button for in-room verification.
We used some data models in the tests that were only used in moor.
I needed to rewrite them against the original data models as well.
Also, the "fake database" on native is now the same as on web, using
hive.
Previously only the first child node of a spoiler was considered when
determining whether there is a spoiler reason. This was unfortunately
incorrect as soon as e.g. the reason contained more than one space.
This is fixed by properly iterating over all child nodes to search for
the reason.
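A sketch of the fix, assuming the parser hands over the spoiler's child
nodes as plain text chunks:

```dart
// Walk *all* child text chunks looking for the `|` separator that ends
// the spoiler reason, instead of only inspecting the first child.
String? findSpoilerReason(Iterable<String> childTexts) {
  final reason = StringBuffer();
  for (final text in childTexts) {
    final sep = text.indexOf('|');
    if (sep >= 0) {
      reason.write(text.substring(0, sep));
      return reason.toString(); // everything before `|` is the reason
    }
    reason.write(text);
  }
  return null; // no separator: a spoiler without a reason
}
```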
This PR adds support for nicer mentions in markdown: you can now
fetch the mention string of a user with `user.mention`. It is
human-friendly (it typically contains the display name) and will get
properly pillified when passing through the markdown parser.
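A usage sketch; `user.mention` is the new getter from this PR, while
the send call is an assumption about the surrounding API:

```dart
// `user.mention` typically resolves to the display name and gets turned
// into a proper pill by the markdown parser on send.
Future<void> greet(dynamic room, dynamic user) =>
    room.sendTextEvent('Hello ${user.mention}!');
```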
Due to chunked lazy sending of megolm sessions it was in theory
possible that we encrypted two olm messages to the same device in
different futures, out of order. Introducing locking here should fix
this (incredibly rare, so far only theoretical?) race condition.
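A sketch of per-device locking with the `synchronized` package, as a
stand-in for the SDK's actual lock:

```dart
import 'package:synchronized/synchronized.dart';

// One lock per device: two futures encrypting olm messages to the same
// device now run strictly one after the other, so the ratchet state can
// never be persisted out of order.
final _olmLocks = <String, Lock>{};

Future<T> withOlmLock<T>(String deviceKey, Future<T> Function() action) =>
    _olmLocks.putIfAbsent(deviceKey, () => Lock()).synchronized(action);
```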