This can improve the start-up time of apps.
The three big database reads on init are
loading the account data, the rooms and
the device keys.
They can now run in parallel
(whether this has any effect may
depend on the platform)
and the init() method can skip
awaiting them. They are awaited
at the latest before the first
received sync is handled.
So the app can already display the
room list before the device keys are
loaded and can request the first sync
from the server before anything
else is loaded from the database.
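A minimal sketch of the idea, with placeholder method names
(_loadAccountData, _loadRooms, _loadDeviceKeys and _handleSync are
illustrative, not the actual SDK API):

```dart
class Client {
  Future<void>? _initLoading;

  Future<void> init() async {
    // Start the three big database reads in parallel and keep the
    // combined future around instead of awaiting it here.
    _initLoading = Future.wait([
      _loadAccountData(),
      _loadRooms(),
      _loadDeviceKeys(),
    ]);
    // init() can return now; the app may already render the room list
    // and start the first sync request.
  }

  Future<void> _handleSync(Map<String, Object?> syncJson) async {
    // Awaited at the latest here, before the first sync is processed.
    await _initLoading;
    // ... process the sync ...
  }

  Future<void> _loadAccountData() async {/* ... */}
  Future<void> _loadRooms() async {/* ... */}
  Future<void> _loadDeviceKeys() async {/* ... */}
}
```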
Apps had a hard time just setting
the read marker to the last event.
The lastEvent in the Room may
not be the actual last event
because we ignore several
event types there. Therefore
it makes sense to refactor
the setUnread method.
Now the Timeline class has an
easy method to set the read
marker to the last synced
event, which only the timeline
can know if we want to avoid
another DB access.
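Hypothetical usage of the new method (the name setReadMarker and the
parameterless call are assumptions based on the description above):

```dart
final timeline = await room.getTimeline();

// Without an explicit event ID the read marker is set to the last
// event the timeline received from sync, even if Room.lastEvent
// points at an older event because of ignored event types.
await timeline.setReadMarker();
```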
This finally makes it possible to
use Flutter's AnimatedList with
our Timeline class, and on web we
can now update single elements
instead of the whole timeline
on every change, which should
be quite good for
performance.
fix: add room invite update to roomStateBox, so invites don't show empty room when app is restarted
Closes #228
See merge request famedly/company/frontend/famedlysdk!865
Room states are ignored if an event with the same event ID
is already known in the database. But
because the event is stored in the
database first and only after that
setState in the Room class is called,
an event is always "known" and
therefore auto-updating was broken.
Events with a status of 1 (SENT) should be sorted into the normal timeline.
They should not be stuck at the bottom. This fixes a bug
where a limited timeline flag
could leave a SENT event stuck at the bottom of
the chat forever.
When requesting history, the `start` parameter could become larger than the number of events
loaded from the database, resulting in an error when attempting to request history.
Just using the .init() method to wait for the client
to initialize is easier than listening to onLoginStateChanged.
But by default it waits for the first sync.
This should be configurable.
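A sketch of how this could look from the app's side (the parameter
name waitForFirstSync is an assumption, not necessarily the actual
API):

```dart
// Only wait until the client has been loaded from the database,
// not until the first sync response has arrived.
await client.init(waitForFirstSync: false);

// Default behaviour: also wait for the first sync.
await client.init();
```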
If a box is corrupted, the clear function fails on it. Then
we should delete the box from the disk.
Currently we use the Hive.deleteFromDisk() method, which does not
work because it only deletes open boxes, but the box is obviously not open in
this case.
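One way to handle this with Hive's own API (a sketch; the SDK's
error handling may differ in detail):

```dart
import 'package:hive/hive.dart';

/// Opens a box and wipes it from disk if it turns out to be corrupted.
Future<Box<T>> openOrResetBox<T>(String name) async {
  try {
    return await Hive.openBox<T>(name);
  } catch (_) {
    // The corrupted box could not be opened, so Hive.deleteFromDisk()
    // would not cover it. Delete this single box by name instead.
    await Hive.deleteBoxFromDisk(name);
    return await Hive.openBox<T>(name);
  }
}
```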
Previously we had a check which used the old
sortOrder value.
This check was removed with the refactoring, which led to
bug #209. This fixes it by checking if the
event is already known in the database.
I am not 100% happy with this solution, as this database API is impossible
to implement with a SQLite DB. Once we start to refactor the whole sync update logic
we may find a better way, but only the fox god knows.
Previously, when using RoomUpdate in the constructor, the notificationCount to update
was never null and was set to 0 if it was missing. Now that we are
no longer using it, I forgot to
add the null fallback at this point.
This leads to serious crashes in the apps at runtime
and that's why I bump the version here as well!
The method was not type safe and therefore there
was no warning that with the sortOrder changes
DateTimes are now compared, which leads to
an exception in the app if they are not converted to milliseconds first.
RoomUpdate came from a time when we had no data model for
SyncUpdates, but now we have one and therefore this class is just
code duplication. This removes the class
and uses the SyncRoomUpdate class from
the matrix_api_lite package instead.
It needed a lot of refactoring in some places,
where I also removed some unnecessary null or type checks.
This is an edge case which might occur
on unstable data connections. The user sends an event and receives the sync
before the response to the sending
HTTP request. This leads to duplicated
events, while the response should
actually be ignored at this point.
The current implementation of sortOrder can be made much simpler now
by just keeping the sort order of the list
and the timelineFragments in the hiveStore. This needed a huge
change but mostly removes a lot of code which can be written
much more simply now. This also required some rewriting of the setState logic and changes to
the prevEvent calculation. This solution should also be more stable.
More information:
https://www.reddit.com/r/fluffychat/comments/pfnlhq/the_sort_order_of_matrix_timelines/
Normally we would not need a workaround here at all, but we had
one in the displayname calculation for
historical reasons. A "good" server should always send the mHeroes correctly.
Instead of removing this workaround completely, we compromise and implement a more
lightweight alternative behaviour by just saying that in a DM room with no
heroes, the directChatMatrixId will be used. This is the same behaviour as in Element,
needs far fewer lines than before and also covers the avatar
calculation. For Synapse we do not seem to need this, but for Conduit it
might be helpful.
That way some concurrency bugs might be fixed, such as two sync
requests being processed at the same time. That can e.g. happen if you
request history while a sync request is already being processed.
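A minimal sketch of the kind of locking that serializes sync
processing (the SyncProcessor class and runInSyncLock are illustrative,
not the SDK's actual implementation):

```dart
import 'dart:async';

class SyncProcessor {
  Future<void> _lock = Future.value();

  /// Queues [handleSync] so that only one sync (real or faked, e.g.
  /// from requesting history) is processed at a time.
  Future<void> runInSyncLock(Future<void> Function() handleSync) {
    final previous = _lock;
    final completer = Completer<void>();
    _lock = completer.future;
    return previous.then((_) async {
      try {
        await handleSync();
      } finally {
        completer.complete();
      }
    });
  }
}
```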
We should just leave the `auth` object null, not send it on the
first try, and wait for the server's response. This worked in the past
but is now broken because of changes in
matrix_api_lite. This could also be the cause of some
bootstrap issues.
I have also removed an unnecessary check whether a String is a String and just made it a
null check, because that was what was intended at this point.
Because this blocks uiaRequests, it is a hotfix and therefore directly bumps the version.
As it turns out, some of the code set the prev_batch for rooms to an
empty string too early. For Synapse this means "request from the start",
for Conduit it is just an error. This commit fixes that by never resolving
null --> empty string, but throwing an error instead.
The old mentionMap was very inefficient to build and scaled badly with
room member count. This resulted in noticeable lag when sending any message
in a large room, no matter if it contained a mention or not.
Now the algorithm is severely optimized and mentions (and emotes) are
only loaded when actually used.
It appears that [hideEdit] in Event.getLocalizedBody was written in a way that
assumes a valid event body. This was also fixed, while also adding tests for the
various parameters of Event.getLocalizedBody.
Using JoinedRoomUpdate() in a fake
sync for archived rooms when requesting
the history leads to the problem that
the room is stored as a joined room
in the store, which is wrong.
getRoomById now searches the local cache for the given room and returns null if it is not
found. If you have loaded the [archive] before, it can also return
archived rooms.
This should make it much easier to display
archived rooms in the client.
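Hypothetical usage (loadArchive() stands in for whatever call loads
the [archive] mentioned above):

```dart
await client.loadArchive();

// With the archive loaded, getRoomById can also resolve archived
// rooms from the local cache; otherwise it simply returns null.
final room = client.getRoomById('!old_room:example.org');
if (room == null) {
  print('Room is neither joined/invited nor in the loaded archive.');
}
```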
In the tests we used some data models which were only used in Moor.
I needed to rewrite them with the original data as well.
Also, the "fake database" on native is now the same as on web, using Hive.
Previously only the first child node of a spoiler was considered to
determine if there should be a spoiler reason. This was, unfortunately,
incorrect as soon as e.g. the reason contained more than one space. This is
fixed by properly iterating over all child nodes to search for the reason.
The isolates package is discontinued and not compatible
with the newest Dart version.
dart:isolate is not an option because importing this
library makes it impossible to run the Matrix
SDK on Dart web. It just won't
build. So we now just rely on
the Flutter app passing in the compute method.
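A sketch of what passing in a compute-like callback could look like
(the typedef and the runInBackground helper are assumptions modelled
on Flutter's compute function, not the actual SDK API):

```dart
import 'dart:async';

typedef ComputeCallback = Future<R> Function<Q, R>(
    FutureOr<R> Function(Q message) callback, Q message);

class Client {
  Client({ComputeCallback? compute}) : _compute = compute;

  final ComputeCallback? _compute;

  Future<R> runInBackground<Q, R>(
      FutureOr<R> Function(Q message) callback, Q message) async {
    final compute = _compute;
    // A Flutter app can pass Flutter's compute function here so the
    // work runs in a background isolate; pure Dart and web fall back
    // to running the callback in the same isolate.
    if (compute != null) return compute<Q, R>(callback, message);
    return await callback(message);
  }
}
```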
This PR adds support for nicer mentions in markdown: you can now
fetch the mention string of a user with `user.mention`, which is
human-friendly (it typically contains the display name) and will get
properly pillified when passed through the markdown parser.
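Hypothetical usage based on the description above:

```dart
final user = room.getUserByMXIDSync('@alice:example.org');

// Something like '@Alice' instead of the raw matrix ID.
final mention = user.mention;

// The markdown parser turns the mention back into a proper pill
// (a matrix.to link) in the formatted body.
await room.sendTextEvent('Hello $mention, how are you?');
```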
Due to chunked lazy sending of megolm sessions it was in theory possible that we encrypted two olm
messages to the same device in different futures out of order. Introducing locking here should
fix this (incredibly rare, so far only theoretical?) race condition.
If the well-known lookup fails (not JSON, 404, etc.) we should assume a
reasonable fallback (the domain part with https prepended). As clients are
expected to call Client.checkHomeserver on the resulting domain anyway,
we can safely assume this default, as it is still validated whether there
is actually a Matrix homeserver running on that endpoint.
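A sketch of the fallback logic (the helper below is illustrative; the
real SDK code differs in detail):

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

Future<Uri> resolveHomeserver(String userDomain) async {
  // Reasonable fallback: the domain part with https prepended.
  final fallback = Uri.https(userDomain, '');
  try {
    final response = await http
        .get(Uri.https(userDomain, '/.well-known/matrix/client'));
    if (response.statusCode != 200) return fallback;
    final wellKnown = json.decode(response.body) as Map<String, dynamic>;
    final homeserver = wellKnown['m.homeserver'];
    if (homeserver is! Map || homeserver['base_url'] is! String) {
      return fallback;
    }
    return Uri.parse(homeserver['base_url'] as String);
  } catch (_) {
    // Not JSON, network error, ... -> use the fallback, which
    // Client.checkHomeserver will validate afterwards anyway.
    return fallback;
  }
}
```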
If the currentVersion of the database is null, then the database has never been used.
Therefore we store the current version and do not call the migrate method.
With the switch to Hive a regression in sending to_device keys was
introduced: when popping elements, .deleteAt(), i.e. deleting at the index,
was used instead of .delete(), i.e. deleting by key. As the new events
pushed onto the queue use Hive's auto-increment keys, a .delete() is
appropriate here.
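For illustration, how such a queue behaves with Hive's auto-increment
keys (the box layout is assumed, not the SDK's actual schema):

```dart
import 'package:hive/hive.dart';

Future<void> processQueue(Box queueBox) async {
  // add() uses auto-increment keys, so after deletions the keys no
  // longer match the indices.
  for (final key in queueBox.keys.toList()) {
    final entry = queueBox.get(key);
    print('Sending queued to_device entry: $entry');
    // Delete by key, not by index: deleteAt(i) would remove the
    // wrong entry (or throw) once earlier entries have been removed.
    await queueBox.delete(key);
  }
}
```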
You may have missed the last valid loginState from the stream if you
started listening to it too late. This makes it possible to
always get the current loginState.
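Hypothetical usage (the getter name is an assumption based on the
description above):

```dart
// Read the cached value directly, even if no listener was attached
// when the last state change happened.
print('Current login state: ${client.loginState}');

// New changes still arrive through the stream as before.
client.onLoginStateChanged.stream.listen(
  (state) => print('Login state changed to: $state'),
);
```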
If you call BasicEvent.fromJson, the given content is copied first,
which recursively makes sure
that the Map is of type
Map<String, dynamic>.
Just using the constructor does not do this, which can lead to nested Maps in
the content being an InternalLinkedHashMap and
therefore to type errors.
In Moor this was implemented, but it was forgotten in Hive.
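A small sketch of the difference (copyMap stands in for whatever
deep-copy helper fromJson uses internally):

```dart
Map<String, dynamic> copyMap(Map map) => map.map((key, value) =>
    MapEntry(key.toString(), value is Map ? copyMap(value) : value));

void main() {
  // Simulates content as it may come back from the database, where
  // nested maps are only Map<dynamic, dynamic>:
  final content = <dynamic, dynamic>{
    'body': 'hello',
    'm.relates_to': <dynamic, dynamic>{'rel_type': 'm.replace'},
  };

  // Without the recursive copy this cast throws at runtime:
  //   content['m.relates_to'] as Map<String, dynamic>

  // After the copy, nested maps are Map<String, dynamic> and typed
  // access works:
  final copied = copyMap(content);
  final rel = copied['m.relates_to'] as Map<String, dynamic>;
  print(rel['rel_type']); // m.replace
}
```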
Events with a status of 0 (not sent yet) should be marked as failed on restart.
In fact they should be marked as failed if they are older than one minute. To avoid a big startup job which iterates through all events in the database,
we just do a time check when opening a room, where we iterate through all events anyway.
The new implementation is now in the constructor of the Event and therefore
independent of the database implementation.
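A sketch of the time check (the status constants and constructor shape
are assumptions following the description above):

```dart
const statusSending = 0;
const statusError = -1;

class Event {
  Event({required int status, required this.originServerTs})
      // A 'sending' event older than one minute can no longer succeed,
      // so it is marked as failed right in the constructor.
      : status = (status == statusSending &&
                DateTime.now().difference(originServerTs) >
                    const Duration(minutes: 1))
            ? statusError
            : status;

  final int status;
  final DateTime originServerTs;
}
```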
Fixes that a sync could be done / processed while the client was still being initialized (loaded from the database). This has led to multiple bugs, such as the verified status of keys getting lost, notifications that came in during app startup displaying oddly, etc.
Additionally, the init lock was released too early; it is now released when the init is actually done.
This new sync status stream exposes the current status of the sync, making it possible
to display in the UI where the sync currently hangs and
what the progress is while updating 1000 rooms. So the app can display a
progress bar.
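Hypothetical usage (the stream name and the fields on the update are
assumptions based on the description above):

```dart
client.onSyncStatus.stream.listen((update) {
  // e.g. status: waitingForResponse / processing / finished,
  // progress: a value between 0.0 and 1.0 while rooms are updated.
  print('Sync ${update.status}: ${update.progress}');
  // In a Flutter app this could feed a LinearProgressIndicator.
});
```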
The unawaited method from the pedantic package was a historic solution
for the case that you don't want to await a future in an async function.
But now we can do this with just a comment, which
is the recommended way to do this now.
This makes it possible to have pedantic as a dev_dependency only, which means one dependency less.
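For illustration, the two styles side by side (the ignore comment is
the analyzer's standard way to silence the unawaited_futures lint):

```dart
Future<void> doSomething() async => print('done');

Future<void> main() async {
  // Before, with the pedantic package:
  //   unawaited(doSomething());

  // Now, just a comment on the call site:
  // ignore: unawaited_futures
  doSomething();
}
```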
Currently we only migrate the client and the SSSSCache, but this leads to the
problem that we are no longer self-signed after the migration.
We need to migrate all device keys too.
This also abstracts the migration code into a method. init() is too large already...
The Hive database now implements the whole API except for storing files, which
is better done by the flutter_cache_manager package inside the
Flutter app. All tests already run with Hive now, but the Moor database is still
tested too. We needed to change some wait jobs in the tests because the Hive
database is not 100% in memory in the tests like Moor is.
For now both database implementations are equal and the developer can pick
which one to use, but we plan to get rid of Moor in the future.
This makes sure that the database is null after clearing, so it will
be built again using the databaseBuilder.
It also makes sure that the sync has
aborted BEFORE the clearing starts, to
get rid of some warnings in the logs.
Before the migration of the databases starts, a
logout signal was always sent. This was wrong.
This also cleans up the logs a little bit
and removes the useless parameters from the second init() call,
because those are going to come from the new database anyway.
This allows the user to pass a legacyDatabaseBuilder to the client object;
during the init process the client checks by itself if there is old data in the legacy
database. If there is, it migrates the data and
then deletes the old database. This uses the database_api and is agnostic to
the database implementation.
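Hypothetical usage (the builder helpers createHiveDatabase and
createMoorDatabase are placeholders, not real SDK functions):

```dart
final client = Client(
  'Example App',
  databaseBuilder: (client) async => createHiveDatabase(client),
  legacyDatabaseBuilder: (client) async => createMoorDatabase(client),
);

// During init() the client looks for data in the legacy database,
// migrates it into the new one and then deletes the old database.
await client.init();
```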
This introduces a minor breaking change in the login method.
It now correctly uses the AuthenticationIdentifier
and deprecates the user, medium and address parameters.
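A hedged example of the new call (the exact signature may differ;
AuthenticationUserIdentifier comes from matrix_api_lite):

```dart
await client.login(
  identifier: AuthenticationUserIdentifier(user: '@alice:example.org'),
  password: 'secret',
);

// Deprecated style that this change replaces:
//   await client.login(user: 'alice', password: 'secret');
```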