verdnatura-chat/ios/Pods/Flipper-Folly/folly/SharedMutex.h

1761 lines · 64 KiB · C

Merge beta into master (#2143)

* [FIX] Messages being sent but showing as temp status (#1469)
* [FIX] Missing messages after reconnect (#1470)
* [FIX] Few fixes on themes (#1477)
* [I18N] Missing German translations (#1465)
  * Missing German translation
  * adding a missing space behind colon
  * added a missing space after colon
  * and another attempt to finally fix this – got confused by all the branches
  * some smaller fixes for the translation
  * better wording
  * fixed another typo
* [FIX] Crash while displaying the attached image with http on file name (#1401)
* [IMPROVEMENT] Tap app and server version to copy to clipboard (#1425)
* [NEW] Reply notification (#1448)
* [FIX] Incorrect background color login on iPad (#1480)
* [FIX] Prevent multiple tap on send (Share Extension) (#1481)
* [NEW] Image Viewer (#1479)
* [DOCS] Update Readme (#1485)
* [FIX] Jitsi with Hermes Enabled (#1523)
* [FIX] Draft messages not working with themed Messagebox (#1525)
* [FIX] Go to direct message from members list (#1519)
* [FIX] Make SAML wait for idp token instead of creating it on client (#1527)
* [FIX] Server Test Push Notification (#1508)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [CHORE] Update to new server response (#1509)
* [FIX] Insert messages with blank users (#1529)
* Bump version to 4.2.1 (#1530)
* [FIX] Error when normalizing empty messages (#1532)
* [REGRESSION] CAS (#1570)
* Bump version to 4.2.2 (#1571)
* [FIX] Add username block condition to prevent error (#1585)
* Bump version to 4.2.3
* Bump version to 4.2.4
* Bump version to 4.3.0 (#1630)
* [FIX] Channels doesn't load (#1586)
  * [FIX] Channels doesn't load
  * [FIX] Update roomsUpdatedAt when subscriptions.length is 0
  * [FIX] Remove unnecessary changes
  * [FIX] Improve the code
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Make SAML to work on Rocket.Chat < 2.3.0 (#1629)
* [NEW] Invite links (#1534)
* [FIX] Set the http-agent to the form that Rocket.Chat requires for logging (#1482)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] "Following thread" and "Unfollowed Thread" is hardcoded and not translated (#1625)
* [FIX] Disable reset button if form didn't changed (#1569)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Header title of RoomInfoView (#1553)
* [I18N] Gallery Permissions DE (#1542)
* [FIX] Not allow to send messages to archived room (#1623)
* [FIX] Profile fields automatically reset (#1502)
* [FIX] Show attachment on ThreadMessagesView (#1493)
* [NEW] Wordpress auth (#1633)
* [CHORE] Add Start Packager script (#1639)
* [CHORE] Update RN to 0.61.5 (#1638)
  * [CHORE] Update RN to 0.61.5
  * [CHORE] Update react-native patch
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* Bump version to 4.3.1 (#1641)
* [FIX] Change force logout rule (#1640)
* Bump version to 4.4.0 (#1643)
* [IMPROVEMENT] Use MessagingStyle on Android Notification (#1575)
* [NEW] Request review (#1627)
* [NEW] Pull to refresh RoomView (#1657)
* [FIX] Unsubscribe from room (#1655)
* [FIX] Server with subdirs (#1646)
* [NEW] Clear cache (#1660)
* [IMPROVEMENT] Memoize and batch subscriptions updates (#1642)
* [FIX] Disallow empty sharing (#1664)
* [REGRESSION] Use HTTPS links for sharing and markets protocol for review (#1663)
* [FIX] In some cases, share extension doesn't load images (#1649)
* [i18n] DE translations for new invite function and some minor fixes (#1631)
* [FIX] Remove duplicate jetify step (#1628)
  minor: also remove 'cd' calls
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [REGRESSION] Read messages (#1666)
* [i18n] German translations missing (#1670)
* [FIX] Notifications crash on older Android Versions (#1672)
* [i18n] Added Dutch translation (#1676)
* [NEW] Omnichannel Beta (#1674)
* [NEW] Confirm logout/clear cache (#1688)
* [I18N] Add es-ES language (#1495)
* [NEW] UiKit Beta (#1497)
* [IMPROVEMENT] Use reselect (#1696)
* [FIX] Notification in Android API level less than 24 (#1692)
* [IMPROVEMENT] Send tmid on slash commands and media (#1698)
* [FIX] Unhandled action on UIKit (#1703)
* [NEW] Pull to refresh RoomsList (#1701)
* [IMPROVEMENT] Reset app when language is changed (#1702)
* [FIX] Small fixes on UIKit (#1709)
* [FIX] Spotlight (#1719)
* [CHORE] Update react-native-image-crop-picker (#1712)
* [FIX] Messages Overlapping (Android) and MessageBox Scroll (iOS) (#1720)
* [REGRESSION] Remove @ and # from mention (#1721)
* [NEW] Direct message from user info (#1516)
* [FIX] Delete slash commands (#1723)
* [IMPROVEMENT] Hold URL to copy (#1684)
* [FIX] Different sourcemaps generation for Hermes (#1724)
  * [FIX] Different sourcemaps generation for Hermes
  * Upload sourcemaps after build
* [REVERT] Show emoji keyboard on Android (#1738)
* [FIX] Stop logging react-native-image-crop-picker (#1745)
* [FIX] Prevent toast ref error (#1744)
* [FIX] Prevent reaction map error (#1743)
* [FIX] Add missing calls to user info (#1741)
* [FIX] Catch room unsubscribe error (#1739)
* [i18n] Missing German keys (#1735)
* [FIX] Missing i18n on MessagesView title (#1733)
* [FIX] UIKit Modal: Weird behavior on Android Tablet (#1742)
* [i18n] Missing key on German (#1747)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [i18n] Add Italian (#1736)
* [CHORE] Memory leaks investigation (#1675)
* [IMPROVEMENT] Alert verify email when enabled (#1725)
* [NEW] Jitsi JWT added to URL (#1746)
* [FIX] UIKit submit when connection lost (#1748)
* Bump version to 4.5.0 (#1761)
* [NEW] Default browser (#1752)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] HTTP Basic Auth (#1753)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Honor profile fields edit settings (#1687)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Room announcements (#1726)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Honor Register/Login settings (#1727)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Make links clickable on Room Info (#1730)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [NEW] Hide system messages (#1755)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Honor "Message_AudioRecorderEnabled" (#1764)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [i18n] Missing de keys (#1765)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Redirect user to SetUsernameView (#1728)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Join Room (#1769)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Accept all media types using * (#1770)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Use RealName when necessary (#1758)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Markdown Line Break (#1783)
* [IMPROVEMENT] Remove useMarkdown (#1774)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Open browser rather than webview on Create Workspace (#1788)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Markdown perf (#1796)
* [FIX] Stop video when modal is closed (#1787)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Hide reply notification action when there are missing data (#1771)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [i18n] Added Japanese translation (#1781)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Reset password error message (#1772)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Close tablet modal (#1773)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Setting not present (#1775)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Thread header (#1776)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Keyboard tracking loses input ref (#1784)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [NEW] Mark message as unread (#1785)
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [IMPROVEMENT] Log server version (#1786)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Add loading message on long running tasks (#1798)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [CHORE] Switch Apple account on Fastlane (#1810)
* [FIX] Watermelon throwing "Cannot update a record with pending updates" (#1754)
* [FIX] Detox tests (#1790)
* [CHORE] Use markdown preview on RoomView Header (#1807)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] LoginSignup blink services (#1809)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Request user presence on demand (#1813)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Remove all invited users when create a channel (#1814)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Pop from room which you have been removed (#1819)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Room Info styles (#1820)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [i18n] Add missing German keys (#1800)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Empty mentions for @all and @here when real name is enabled (#1822)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [TESTS] Markdown added to Storybook (#1812)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [REGRESSION] Room View header title (#1827)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Storybook snapshots (#1831)
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [FIX] Mentions (#1829)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Thread message not found (#1830)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Separate delete and remove channel (#1832)
  * Rename to delete room
  * Separate delete and remove channel
  * handleRemoved -> handleRoomRemoved
  * [FIX] Navigate to RoomsList & Handle tablet case
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [NEW] Filter system messages per room (#1815)
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] e2e tests (#1838)
* [FIX] Consecutive clear cache calls freezing app (#1851)
* Bump version to 4.5.1 (#1853)
* [FIX][iOS] Ignore silent mode on audio player (#1862)
* [IMPROVEMENT] Create App Group property on Info.plist (#1858)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Make username clickable on message (#1618)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Show proper error message on profile (#1768)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Show toast when a message is starred/unstarred (#1616)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Incorrect size params to avatar endpoint (#1875)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Remove unrecognized emoji flags on android (#1887)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Remove react-native global installs (#1886)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Emojis transparent on android (#1881)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Bump acorn from 5.7.3 to 5.7.4 (#1876)
  Bumps [acorn](https://github.com/acornjs/acorn) from 5.7.3 to 5.7.4.
  - [Release notes](https://github.com/acornjs/acorn/releases)
  - [Commits](https://github.com/acornjs/acorn/compare/5.7.3...5.7.4)
  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Bump version to 4.6.0 (#1911)
* [FIX] Encode Image URI (#1909)
  * [FIX] Encode Image URI
  * [FIX] Check if Image is Valid
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [NEW] Adaptive Icons (#1904)
  * Remove unnecessary stuff from debug build
  * Adaptive icon for experimental app
* [FIX] Stop showing message on leave channel (#1896)
  * [FIX] Leave room don't show 'was removed' message
  * [FIX] Remove duplicated code
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [i18n] Added missing German translations (#1900)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Linkedin OAuth login (#1913)
* [CHORE] Fix typo in CreateChannel View (#1930)
* [FIX] Respect protocol in HTTP Auth IPs (#1933)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Use new LinkedIn OAuth url (#1935)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [CHORE] Use storyboard on splash screen (#1939)
  * Update react-native-bootsplash
  * iOS
  * Fix android
* [FIX] Check if avatar exists before create Icon (#1927)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Ignore self typing event (#1950)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Change default directory listing to Users (#1948)
  * fix: change default directory listing to Users
  * follow server settings
  * Fix state to props
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [NEW] Onboarding layout (#1954)
  * Onboarding texts
  * OnboardingView
  * FormContainer
  * Minor fixes
  * NewServerView
  * Remove code
  * Refactor
  * WorkspaceView
  * Stash
  * Login with email working
  * Login with
  * Join open
  * Revert "Login with" (reverts commit d05dc507d2e9a2db76d433b9b1f62192eba35dbd)
  * Fix create account styles
  * Register
  * Refactor
  * LoginServices component
  * Refactor
  * Multiple servers
  * Remove native images
  * Refactor styles
  * Fix testid
  * Fix add server on tablet
  * i18n
  * Fix close modal
  * Fix TOTP
  * [FIX] Registration disabled
  * [FIX] Login Services separator
  * Fix logos
  * Fix AppVersion name
  * I18n
  * Minor fixes
  * [FIX] Custom Fields
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [NEW] Create discussions (#1942)
  * [WIP][NEW] Create Discussion
  * [FIX] Clear multiselect & Translations
  * [NEW] Create Discussion at MessageActions
  * [NEW] Disabled Multiselect
  * [FIX] Initial channel
  * [NEW] Create discussion on MessageBox Actions
  * [FIX] Crashing on edit name
  * [IMPROVEMENT] New message layout
  * [CHORE] Update README
  * [NEW] Avatars on MultiSelect
  * [FIX] Select Users
  * [FIX] Add redirect and Handle tablet
  * [IMPROVEMENT] Split CreateDiscussionView
  * [FIX] Create a discussion inner discussion
  * [FIX] Create a discussion
  * [I18N] Add pt-br
  * Change icons
  * [FIX] Nav to discussion & header title
  * Fix header
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Load messages (#1910)
  * Create updateLastOpen param on readMessages
  * Remove InteractionManager from load messages
* [NEW] Custom Status (#1811)
  * [NEW] Custom Status
  * [FIX] Subscribe to changes
  * [FIX] Improve code using Banner component
  * [IMPROVEMENT] Toggle modal
  * [NEW] Edit custom status from Sidebar
  * [FIX] Modal when tablet
  * [FIX] Styles
  * [FIX] Switch to react-native-promp-android
  * [FIX] Custom Status UI
  * [TESTS] E2E Custom Status
  * Fix banner
  * Fix banner
  * Fix subtitle
  * status text
  * Fix topic header
  * Fix RoomActionsView topic
  * Fix header alignment on Android
  * [FIX] RoomInfo crashes when without statusText
  * [FIX] Use users.setStatus
  * [FIX] Remove customStatus of ProfileView
  * [FIX] Room View Thread Header
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] UI issues of Create Discussion View (#1965)
* [NEW] Direct Message between multiple users (#1958)
  * [WIP] DM between multiple users
  * [WIP][NEW] Create new DM between multiple users
  * [IMPROVEMENT] Improve createChannel Sagas
  * [IMPROVEMENT] Selected Users view
  * [IMPROVEMENT] Room Actions of Group DM
  * [NEW] Create new DM between multiple users
  * [NEW] Group DM avatar
  * [FIX] Directory border
  * [IMPROVEMENT] Use isGroupChat
  * [CHORE] Remove legacy getRoomMemberId
  * [NEW] RoomTypeIcon
  * [FIX] No use legacy method on RoomInfoView
  * [FIX] Blink header when create new DM
  * [FIX] Only show create direct message option when allowed
  * [FIX] RoomInfoView
  * pt-BR
  * Few fixes
  * Create button name
  * Show create button only after a user is selected
  * Fix max users issues
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Add server and hide login (#1968)
  * Navigate to new server workspace from ServerDropdown if there's no token
  * Hide login button based on login services and Accounts_ShowFormLogin setting
  * [FIX] Lint
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [FIX] MultiSelect Keyboard behavior (Android) (#1969)
  * fixed-modal-position
  * made-changes
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [FIX] Bottom border style on DirectoryView (#1963)
  * [FIX] Border style
  * [FIX] Refactoring
  * [FIX] fix color of border
  * Undo
  Co-authored-by: Aroo <azhaubassar@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Clear settings on server change (#1967)
* [FIX] Deeplinking without RoomId (#1925)
  * [FIX] Deeplinking without rid
  * [FIX] Join channel
  * [FIX] Deep linking without rid
  * Update app/lib/methods/canOpenRoom.js
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [NEW] Two Factor authentication via email (#1961)
  * First api call working
  * [NEW] REST API Post wrapper 2FA
  * [NEW] Send 2FA on Email
  * [I18n] Add translations
  * [NEW] Translations & Cancel totp
  * [CHORE] Totp -> TwoFactor
  * [NEW] Two Factor by email
  * [NEW] Tablet Support
  * [FIX] Text colors
  * [NEW] Password 2fa
  * [FIX] Encrypt password on 2FA
  * [NEW] MethodCall2FA
  * [FIX] Password fallback
  * [FIX] Wrap all post/methodCall with 2fa
  * [FIX] Wrap missed function
  * few fixes
  * [FIX] Use new TOTP on Login
  * [improvement] 2fa methodCall
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [FIX] Correct message for manual approval user Registration (#1906)
  * [FIX] Correct message for manual approval from admin shown on Registeration
  * lint fix - added semicolon
  * Updated the translations
  * [FIX] Translations
  * i18n to match server
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Direct Message between multiple users REST (#1974)
* [FIX] Investigate app losing connection issues (#1890)
  * [WIP] Reopen without timeOut & ping with 5 sec & Fix Unsubscribe
  * [FIX] Remove duplicated close
  * [FIX] Use no-dist lib
  * [FIX] Try minor fix
  * [FIX] Try reopen connection when app was put on foreground
  * [FIX] Remove timeout
  * [FIX] Build
  * [FIX] Patch
  * [FIX] Snapshot
  * [IMPROVEMENT] Decrease time to reopen
  * [FIX] Some fixes
  * [FIX] Update sdk version
  * [FIX] Subscribe Room Once
  * [CHORE] Update sdk
  * [FIX] Subscribe Room
  * [FIX] Try to resend missed subs
  * [FIX] Users never show status when start app without network
  * [FIX] Subscribe to room
  * [FIX] Multiple servers
  * [CHORE] Update SDK
  * [FIX] Don't duplicate streams on subscribeAll
  * [FIX] Server version when start the app offline
  * [FIX] Server version cached
  * [CHORE] Remove unnecessary code
  * [FIX] Offline server version
  * [FIX] Subscribe before connect
  * [FIX] Remove unncessary props
  * [FIX] Update sdk
  * [FIX] User status & Unsubscribe Typing
  * [FIX] Typing at incorrect room
  * [FIX] Multiple Servers
  * [CHORE] Update SDK
  * [REVERT] Undo some changes on SDK
  * [CHORE] Update sdk to prevent incorrect subscribes
  * [FIX] Prevent no reconnect
  * [FIX] Remove close on open
  * [FIX] Clear typing when disconnect/connect to SDK
  * [CHORE] Update SDK
  * [CHORE] Update SDK
  * Update SDK
  * fix merge develop
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Single message thread inserting thread without rid (#1999)
* [FIX] ThreadMessagesView crashing on load (#1997)
* [FIX] Saml (#1996)
  * [FIX] SAML incorrect close
  * [FIX] Pathname
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Change user own status (#1995)
  * [FIX] Change user own status
  * [IMPROVEMENT] Set activeUsers
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Loading all updated rooms after app resume (#1998)
  * [FIX] Loading all updated rooms after app resume
  * Fix room date on RoomItem
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Change notifications preferences (#2000)
  * [FIX] Change notifications preferences
  * [IMPROVEMENT] Picker View
  * [I18N] Translations
  * [FIX] Picker Selection
  * [FIX] List border
  * [FIX] Prevent crash
  * [FIX] Not-Pref tablet
  * [FIX] Use same style of LanguageView
  * [IMPROVEMENT] Send listItem title
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Bump version to 4.6.1 (#2001)
* [FIX] DM header blink (#2011)
* [FIX] Split get settings into two requests (#2017)
  * [FIX] Split get settings into two requests
  * [FIX] Clear settings only when change server
  * [IMPROVEMENT] Move the way to clear settings
  * [REVERT] Revert some changes
  * [FIX] Server Icon
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [REGRESSION] Invite Links (#2007)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Read only channel/broadcast (#1951)
  * [FIX] Read only channel/broadcast
  * [FIX] Roles missing
  * [FIX] Check roles to readOnly
  * [FIX] Can post
  * [FIX] Respect post-readonly permission
  * [FIX] Search a room readOnly
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Cas auth (#2024)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Login TOTP Compatibility to older servers (#2018)
  * [FIX] Login TOTP Compatibility to older servers
  * [FIX] Android crashes if use double negation
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Bump version to 4.6.4 (#2029)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Lint (#2030)
* [FIX] UIKit with only one block (#2022)
  * [FIX] Message with only one block
  * [FIX] Update headers
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Bump version to 4.7.0 (#2035)
* [FIX] Action Tint Color on Black theme (#2081)
* [FIX] Prevent crash when thread is not found (#2080)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Prevent double click (#2079)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Show slash commands when disconnected (#2078)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Backhandler onboarding (#2077)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Respect UI_Allow_room_names_with_special_chars setting (#2076)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] RoomsList update sometimes isn't fired (#2071)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Stop inserting last message as message object from rooms stream if room is focused (#2069)
  * [IMPROVEMENT] No insert last message if the room is focused
  * fix discussion/threads
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Hide system messages (#2067)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Pending update (#2066)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Prevent crash when room.uids was not inserted yet (#2055)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FEATURE] Save video (#2063)
  * added-feature-save-video
  * fix sha256
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Send totp-code to meteor call (#2050)
  * fixed-issue
  * removed-variable-name-errors
  * reverted-last-commit
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] MessageBox mention shouldn't show group DMs (#2049)
  * fixed-issue
  * [FIX] Filter users only if it's not a group chat
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] AttachmentView (Android)(Tablet) (#2047)
  * [fix] Tablet attachment View and Room Navigation
  * fix weird navigation and margin bottom
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Allow special chars in Filename (#2020)
  * fixed-filename-issue
  * improve
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Recorded audio on Android doesn't play on iOS (#2073)
  * react-native-video -> expo-av
  * remove react-native-video
  * Add audio mode
  * update mocks
  * [FIX] Loading bigger than play/pause
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Message Touchable (#2082)
  * [FIX] Avatar touchable
  * [IMPROVEMENT] onLongPress on all Message Touchables
  * [IMPROVEMENT] User & baseUrl on MessageContext
  * [FIX] Context Access
  * [FIX] BaseURL
  * Fix User
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] ReactionsModal (#2085)
* [NEW] Delete Server (#1975)
  * [NEW] Delete server
  * [FIX] Revert removed function
  * pods
  * i18n
  * Revert "pods" (reverts commit 2854a1650538159aeeafe90fdb2118d12b76a82f)
  Co-authored-by: Bruno Dantas <oliveiradantas96@gmail.com>
  Co-authored-by: Calebe Rios <calebersmendes@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Change server while connecting/updating (#1981)
  * [IMPROVEMENT] Change server while connecting
  * [FIX] Not login/reconnect to previous server
  * [FIX] Abort all fetch while connecting
  * [FIX] Abort sdk fetch
  * [FIX] Patch-package
  * Add comments
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Keep screen awake while recording/playing some audio (#2089)
  * [IMPROVEMENT] Keep screen awake while recording/playing some audio
  * [FIX] Add expo-keep-awake mock
* [FIX] UIKit crashing when UIKitModal receive update event (#2088)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Close announcement banner (#2064)
  * [NEW] Created new field in subscription table
  * [NEW] New field added to obeserver in room view
  * [NEW] Added icon and new design to banner
  * [NEW] Close banner function works
  * [IMPROVEMENT] closed banner status now update correctly
  * improve banner style
  Signed-off-by: Ezequiel De Oliveira <ezequiel1de1oliveira@gmail.com>
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Update all dependencies (#2008)
  * Android RN 62
  * First steps iOS
  * Second step iOS
  * iOS compiling
  * "New" build system
  * Finish iOS
  * Flipper
  * Update to RN 0.62.1
  * expo libs
  * Hermes working
  * Fix lint
  * Fix android build
  * Patches
  * Dev patches
  * Patch WatermelonDB: https://github.com/Nozbe/WatermelonDB/pull/660
  * Fix jitsi
  * Update several minors
  * Update dev minors and lint
  * react-native-keyboard-input
  * Few updates
  * device info
  * react-native-fast-image
  * Navigation bar color
  * react-native-picker-select
  * webview
  * reactotron-react-native
  * Watermelondb
  * RN 0.62.2
  * Few updates
  * Fix selection
  * update gems
  * remove lib
  * finishing
  * tests
  * Use node 10
  * Re-enable app bundle
  * iOS build
  * Update jitsi ios
* [NEW] Passcode and biometric unlock (#2059)
  * Update expo libs
  * Configure expo-local-authentication
  * ScreenLockedView
  * Authenticate server change
  * Auth on app resume
  * localAuthentication util
  * Add servers.lastLocalAuthenticatedSession column
  * Save last session date on background
  * Use our own version of app state redux
  * Fix libs
  * Remove inactive
  * ScreenLockConfigView
  * Apply on saved data
  * Auto lock option label
  * Starting passcode
  * Basic passcode flow working
  * Change passcode
  * Check if biometry is enrolled
  * Use fork
  * Migration
  * Patch expo-local-authentication
  * Use async storage
  * Styling
  * Timer
  * Refactor
  * Lock orientation portrait when not on tablet
  * share extension
  * Deep linking
  * Share extension
  * Refactoring passcode
  * use state
  * Stash
  * Refactor
  * Change passcode
  * Animate dots on error
  * Matching passcodes
  * Shake
  * Remove lib
  * Delete button
  * Fade animation on modal
  * Refactoring
  * ItemInfo
  * I18n
  * I18n
  * Remove unnecessary prop
  * Save biometry column
  * Raise time to lock to 30 seconds
  * Vibrate on wrong confirmation passcode
  * Reset attempts and save last authentication on local passcode confirmation
  * Remove inline style
  * Save last auth
  * Fix header blink
  * Change function name
  * Fix android modal
  * Fix vibration permission
  * PasscodeEnter calls biometry
  * Passcode on the state
  * Biometry button on PasscodeEnter
  * Show whole passcode
  * Secure passcode
  * Save passcode with promise to prevent empty passcodes and immediately lock
  * Patch expo-local-authentication
  * I18n
  * Fix biometry being called every time
  * Blur screen on app inactive
  * Revert "Blur screen on app inactive" (reverts commit a4ce812934adcf6cf87eb1a92aec9283e2f26753)
  * Remove immediately because of how Activities work on Android
  * Pods
  * New layout
  * stash
  * Layout refactored
  * Fix icons
  * Force set passcode from server
  * Lint
  * Improve permission message
  * Forced passcode subtitle
  * Disable based on admin's choice
  * Require local authentication on login success
  * Refactor
  * Update tests
  * Update react-native-device-info to fix notch
  * Lint
  * Fix modal
  * Fix icons
  * Fix min auto lock time
  * Review
  * keep enabled on mobile
  * fix forced by admin when enable unlock with passcode
  * use DEFAULT_AUTO_LOCK when manual enable screenLock
  * fix check has passcode
  * request biometry on first password
  * reset auto time lock when disabled on server
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
* [FIX] Messages View (#2090)
  * [FIX] Messages View
  * [FIX] Opening PDF from Files View
  * [FIX] Audio
  * [FIX] SearchMessagesView
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Big names overflow (#2072)
  * [FIX] Big names overflow
  * [FIX] Message time
  * [FIX] Some alignments
  * fix user item overflow
  * some adjustments
  Co-authored-by: devyaniChoubey <devyanichoubey16@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Avatar of message as an emoji (#2038)
  * fixed-issue
  * removed-hardcoded-emoji
  * Merge develop
  * replaced markdown with emoji componenent
  * made-changes
  * use avatar onPress
  Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [NEW] Livechat (#2004)
  * [WIP][NEW] Livechat info/actions
  * [IMPROVEMENT] RoomActionsView
  * [NEW] Visitor Navigation
  * [NEW] Get Department REST
  * [FIX] Borders
  * [IMPROVEMENT] Refactor RoomInfo View
  * [FIX] Error while navigate from mention -> roomInfo
  * [NEW] Livechat Fields
  * [NEW] Close Livechat
  * [WIP] Forward livechat
  * [NEW] Return inquiry
  * [WIP] Comment when close livechat
  * [WIP] Improve roomInfo
  * [IMPROVEMENT] Forward room
  * [FIX] Department picker
  * [FIX] Picker without results
  * [FIX] Superfluous argument
  * [FIX] Check permissions on RoomActionsView
  * [FIX] Livechat permissions
  * [WIP] Show edit to livechat
  * [I18N] Add pt-br translations
  * [WIP] Livechat Info
  * [IMPROVEMENT] Livechat info
  * [WIP] Livechat Edit
  * [WIP] Livechat edit
  * [WIP] Livechat Edit
  * [WIP] Livechat edit scroll
  * [FIX] Edit customFields
  * [FIX] Clean livechat customField
  * [FIX] Visitor Navigation
  * [NEW] Next input logic LivechatEdit
  * [FIX] Add livechat data to subscription
  * [FIX] Revert change
  * [NEW] Livechat user Status
  * [WIP] Livechat tags
  * [NEW] Edit livechat tags
  * [FIX] Prevent some crashes
  * [FIX] Forward
  * [FIX] Return Livechat error
  * [FIX] Prevent livechat info crash
  * [IMPROVEMENT] Use input style on forward chat
  * OnboardingSeparator -> OrSeparator
  * [FIX] Go to next input
  * [NEW] Added some icons
  * [NEW] Livechat close
  * [NEW] Forward Room Action
  * [FIX] Livechat edit style
  * [FIX] Change status logic
  * [CHORE] Remove unnecessary logic
  * [CHORE] Remove unnecessary code
  * [CHORE] Remove unecessary case
  * [FIX] Superfluous argument
  * [IMPROVEMENT] Submit livechat edit
  * [CHORE] Remove textInput type
  * [FIX] Livechat edit
  * [FIX] Livechat Edit
  * [FIX] Use same effect
  * [IMPROVEMENT] Tags input
  * [FIX] Add empty tag
  * Fix minor issues
  * Fix typo
  * insert livechat room data to our room object
  * review
  * add method calls server version
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Delete Subs (#2091)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Android build (#2094)
* [FIX] Blink header DM (#2093)
  * [FIX] Blink header DM
  * Remove query
  * [FIX] Push RoomInfoView
  * remove unnecessary try/catch
  * [FIX] RoomInfo > Message (Tablet)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Default biometry enabled (#2095)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [IMPROVEMENT] Enable navigating to a room from auth deep linking (#2115)
  * Wait for login success to navigate
  * Enable auth and room deep linking at the same time
* [FIX] NewMessageView Press Item should open DM (#2116)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Roles throwing error (#2110)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Wait attach activity before changeNavigationBarColor (#2111)
  * [FIX] Wait attach activity before changeNavigationBarColor
  * Remove timeout and add try/catch
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] UIKit crash when some app send a list (#2117)
  * [FIX] StoryBook
  * [FIX] UIKit crash when some app send a list
  * [CHORE] Update snapshot
  * [CHORE] Remove token & id
* [FIX] Change bar color while no activity attached (#2130)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Screen Lock options i18n (#2120)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [i18n] Added missing German translation strings (#2105)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Sometimes SDK is null when try to connect (#2131)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [FIX] Autocomplete position on Android (#2106)
  * [FIX] Autocomplete position on Android
  * [FIX] Set selection to 0 when needed
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* Revert "[FIX] Autocomplete position on Android (#2106)" (#2136)
  This reverts commit e8c38d6f6f69ae396a4aae6e37336617da739a6d.
* [FIX] Here and all mentions shouldn't refer to users (#2137)
* [FIX] No send data to bugsnag if it's an aborted request (#2133)
  Co-authored-by: Diego Mello <diegolmello@gmail.com>
* [TESTS] Update and separate E2E tests (#2126)
  * Tests passing until roomslist
  * create room
  * roominfo
  * change server
  * broadcast
  * profile
  * custom status
  * forgot password
  * working
  * room and onboarding
  * Tests separated
  * config.yml refactor
  * Revert "config.yml refactor" (reverts commit 0e984d3029e47612726bf199553f7abdf24843e5)
  * CI
  * lint
  * CI refactor
  * Onboarding tests
  * npx detox
  * Add all tests
  * Save brew cache
  * mac-env executor
  * detox-test command
  * Update readme
  * Remove folder
* [FIX] Screen Lock Time respect local value (#2141)
  * [FIX] Screen Lock Time respect local value
  * [FIX] Enable biometry at the first passcode change

Co-authored-by: phriedrich <info@phriedrich.de>
Co-authored-by: Guilherme Siqueira <guilhersiqueira@gmail.com>
Co-authored-by: Prateek Jain <44807945+Prateek93a@users.noreply.github.com>
Co-authored-by: Djorkaeff Alexandre <djorkaeff.unb@gmail.com>
Co-authored-by: Prateek Jain <prateek93a@gmail.com>
Co-authored-by: devyaniChoubey <52153085+devyaniChoubey@users.noreply.github.com>
Co-authored-by: Bernard Seow <ssbing99@gmail.com>
Co-authored-by: Hiroki Ishiura <ishiura@ja2.so-net.ne.jp>
Co-authored-by: Exordian <jakob.englisch@gmail.com>
Co-authored-by: Daanchaam <daanhendriks97@gmail.com>
Co-authored-by: Youssef Muhamad <emaildeyoussefmuhamad@gmail.com>
Co-authored-by: Iván Álvarez <ialvarezpereira@gmail.com>
Co-authored-by: Sarthak Pranesh <41206172+sarthakpranesh@users.noreply.github.com>
Co-authored-by: Michele Pellegrini <pellettiero@users.noreply.github.com>
Co-authored-by: Tanmoy Bhowmik <tanmoy.openroot@gmail.com>
Co-authored-by: Hibikine Kage <14365761+hibikine@users.noreply.github.com>
Co-authored-by: Ezequiel de Oliveira <ezequiel1de1oliveira@gmail.com>
Co-authored-by: Neil Agarwal <neil@neilagarwal.me>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Govind Dixit <GOVINDDIXIT93@GMAIL.COM>
Co-authored-by: Zhaubassarova Aruzhan <49000079+azhaubassar@users.noreply.github.com>
Co-authored-by: Aroo <azhaubassar@gmail.com>
Co-authored-by: Sarthak Pranesh <sarthak.pranesh2018@vitstudent.ac.in>
Co-authored-by: Siddharth Padhi <padhisiddharth31@gmail.com>
Co-authored-by: Bruno Dantas <oliveiradantas96@gmail.com>
Co-authored-by: Calebe Rios <calebersmendes@gmail.com>
Co-authored-by: devyaniChoubey
<devyanichoubey16@gmail.com>
2020-05-25 20:54:27 +00:00
/*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// @author Nathan Bronson (ngbronson@fb.com)
#pragma once
#include <stdint.h>
#include <atomic>
#include <thread>
#include <type_traits>
#include <folly/CPortability.h>
#include <folly/Likely.h>
#include <folly/concurrency/CacheLocality.h>
#include <folly/detail/Futex.h>
#include <folly/portability/Asm.h>
#include <folly/portability/SysResource.h>
#include <folly/synchronization/AtomicRef.h>
#include <folly/synchronization/SanitizeThread.h>
// SharedMutex is a reader-writer lock. It is small, very fast, scalable
// on multi-core, and suitable for use when readers or writers may block.
// Unlike most other reader-writer locks, its throughput with concurrent
// readers scales linearly; it is able to acquire and release the lock
// in shared mode without cache line ping-ponging. It is suitable for
// a wide range of lock hold times because it starts with spinning,
// proceeds to using sched_yield with a preemption heuristic, and then
// waits using futex and precise wakeups.
//
// SharedMutex provides all of the methods of folly::RWSpinLock,
// boost::shared_mutex, boost::upgrade_mutex, and C++14's
// std::shared_timed_mutex. All operations that can block are available
// in try, try-for, and try-until (system_clock or steady_clock) versions.
//
// SharedMutexReadPriority gives priority to readers,
// SharedMutexWritePriority gives priority to writers. SharedMutex is an
// alias for SharedMutexWritePriority, because writer starvation is more
// likely than reader starvation for the read-heavy workloads targeted
// by SharedMutex.
//
// In my tests SharedMutex is as good or better than the other
// reader-writer locks in use at Facebook for almost all use cases,
// sometimes by a wide margin. (If it is rare that there are actually
// concurrent readers then RWSpinLock can be a few nanoseconds faster.)
// I compared it to folly::RWSpinLock, folly::RWTicketSpinLock64,
// boost::shared_mutex, pthread_rwlock_t, and a RWLock that internally uses
// spinlocks to guard state and pthread_mutex_t+pthread_cond_t to block.
// (Thrift's ReadWriteMutex is based underneath on pthread_rwlock_t.)
// It is generally as good or better than the rest when evaluating size,
// speed, scalability, or latency outliers. In the corner cases where
// it is not the fastest (such as single-threaded use or heavy write
// contention) it is never very much worse than the best. See the bottom
// of folly/test/SharedMutexTest.cpp for lots of microbenchmark results.
//
// Comparison to folly::RWSpinLock:
//
// * SharedMutex is faster than RWSpinLock when there are actually
// concurrent read accesses (sometimes much faster), and ~5 nanoseconds
// slower when there is not actually any contention. SharedMutex is
// faster in every (benchmarked) scenario where the shared mode of
// the lock is actually useful.
//
// * Concurrent shared access to SharedMutex scales linearly, while total
// RWSpinLock throughput drops as more threads try to access the lock
// in shared mode. Under very heavy read contention SharedMutex can
// be two orders of magnitude faster than RWSpinLock (or any reader
// writer lock that doesn't use striping or deferral).
//
// * SharedMutex can safely protect blocking calls, because after an
// initial period of spinning it waits using futex().
//
// * RWSpinLock prioritizes readers; SharedMutex has both reader- and
// writer-priority variants, but defaults to write priority.
//
// * RWSpinLock's upgradeable mode blocks new readers, while SharedMutex's
// doesn't. Both semantics are reasonable. The boost documentation
// doesn't explicitly talk about this behavior (except by omitting
// any statement that those lock modes conflict), but the boost
// implementations do allow new readers while the upgradeable mode
// is held. See https://github.com/boostorg/thread/blob/master/
// include/boost/thread/pthread/shared_mutex.hpp
//
// * RWSpinLock::UpgradedHolder maps to SharedMutex::UpgradeHolder
// (UpgradeableHolder would be even more pedantically correct).
// SharedMutex's holders have fewer methods (no reset) and are less
// tolerant (promotion and downgrade crash if the donor doesn't own
// the lock, and you must use the default constructor rather than
// passing a nullptr to the pointer constructor).
//
// Both SharedMutex and RWSpinLock provide "exclusive", "upgrade",
// and "shared" modes. At all times num_threads_holding_exclusive +
// num_threads_holding_upgrade <= 1, and num_threads_holding_exclusive ==
// 0 || num_threads_holding_shared == 0. RWSpinLock has the additional
// constraint that num_threads_holding_shared cannot increase while
// num_threads_holding_upgrade is non-zero.
//
// Comparison to the internal RWLock:
//
// * SharedMutex doesn't allow a maximum reader count to be configured,
// so it can't be used as a semaphore in the same way as RWLock.
//
// * SharedMutex is 4 bytes, RWLock is 256.
//
// * SharedMutex is as fast or faster than RWLock in all of my
// microbenchmarks, and has positive rather than negative scalability.
//
// * RWLock and SharedMutex are both writer priority locks.
//
// * SharedMutex avoids latency outliers as well as RWLock.
//
// * SharedMutex uses different names (t != 0 below):
//
// RWLock::lock(0) => SharedMutex::lock()
//
// RWLock::lock(t) => SharedMutex::try_lock_for(milliseconds(t))
//
// RWLock::tryLock() => SharedMutex::try_lock()
//
// RWLock::unlock() => SharedMutex::unlock()
//
// RWLock::enter(0) => SharedMutex::lock_shared()
//
// RWLock::enter(t) =>
// SharedMutex::try_lock_shared_for(milliseconds(t))
//
// RWLock::tryEnter() => SharedMutex::try_lock_shared()
//
// RWLock::leave() => SharedMutex::unlock_shared()
//
// * RWLock allows the reader count to be adjusted by a value other
// than 1 during enter() or leave(). SharedMutex doesn't currently
// implement this feature.
//
// * RWLock's methods are marked const, SharedMutex's aren't.
//
// Reader-writer locks have the potential to allow concurrent access
// to shared read-mostly data, but in practice they often provide no
// improvement over a mutex. The problem is the cache coherence protocol
// of modern CPUs. Coherence is provided by making sure that when a cache
// line is written it is present in only one core's cache. Since a memory
// write is required to acquire a reader-writer lock in shared mode, the
// cache line holding the lock is invalidated in all of the other caches.
// This leads to cache misses when another thread wants to acquire or
// release the lock concurrently. When the RWLock is colocated with the
// data it protects (common), cache misses can also continue to occur when
// a thread that already holds the lock tries to read the protected data.
//
// Ideally, a reader-writer lock would allow multiple cores to acquire
// and release the lock in shared mode without incurring any cache misses.
// This requires that each core records its shared access in a cache line
// that isn't read or written by other read-locking cores. (Writers will
// have to check all of the cache lines.) Typical server hardware when
// this comment was written has 16 L1 caches and cache lines of 64 bytes,
// so a lock striped over all L1 caches would occupy a prohibitive 1024
// bytes. Nothing says that we need a separate set of per-core memory
// locations for each lock, however. Each SharedMutex instance is only
// 4 bytes, but all locks together share a 2K area in which they make a
// core-local record of lock acquisitions.
//
// SharedMutex's strategy of using a shared set of core-local stripes has
// a potential downside, because it means that acquisition of any lock in
// write mode can conflict with acquisition of any lock in shared mode.
// If a lock instance doesn't actually experience concurrency then this
// downside will outweigh the upside of improved scalability for readers.
// To avoid this problem we dynamically detect concurrent accesses to
// SharedMutex, and don't start using the deferred mode unless we actually
// observe concurrency. See kNumSharedToStartDeferring.
//
// It is explicitly allowed to call unlock_shared() from a different
// thread than lock_shared(), so long as they are properly paired.
// unlock_shared() needs to find the location at which lock_shared()
// recorded the lock, which might be in the lock itself or in any of
// the shared slots. If you can conveniently pass state from lock
// acquisition to release then the fastest mechanism is to std::move
// the SharedMutex::ReadHolder instance or a SharedMutex::Token (using
// lock_shared(Token&) and unlock_shared(Token&)). The guard or token
// will tell unlock_shared where in deferredReaders[] to look for the
// deferred lock. The Token-less version of unlock_shared() works in all
// cases, but is optimized for the common (no inter-thread handoff) case.
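// The Token fast path described above can be sketched as follows. This is
// a minimal illustration, assuming folly::SharedMutex is available; the
// function name is hypothetical:
//
// ```cpp
// #include <folly/SharedMutex.h>
//
// void token_handoff_sketch(folly::SharedMutex& mu) {
//   // The token records whether the acquisition was inline or deferred,
//   // and in which deferredReaders[] slot, so the release is O(1).
//   folly::SharedMutex::Token token;
//   mu.lock_shared(token);
//   // ... read the protected data; the token (or a moved ReadHolder) may
//   // be handed to another thread, which may call unlock_shared(token) ...
//   mu.unlock_shared(token);  // no deferredReaders[] scan needed
// }
// ```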
//
// In both read- and write-priority mode, a waiting lock() (exclusive mode)
// only blocks readers after it has waited for an active upgrade lock to be
// released; until the upgrade lock is released (or upgraded or downgraded)
// readers will still be able to enter. Preferences about lock acquisition
// are not guaranteed to be enforced perfectly (even if they were, there
// is theoretically the chance that a thread could be arbitrarily suspended
// between calling lock() and SharedMutex code actually getting executed).
//
// try_*_for methods always try at least once, even if the duration
// is zero or negative. The duration type must be compatible with
// std::chrono::steady_clock. try_*_until methods also always try at
// least once. std::chrono::system_clock and std::chrono::steady_clock
// are supported.
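// The "always try at least once" behavior can be demonstrated with
// std::shared_timed_mutex, which provides the same timed interface that
// SharedMutex implements (folly::SharedMutex is designed as a drop-in for
// it); this is a sketch of the semantics, not part of the implementation:
//
// ```cpp
// #include <cassert>
// #include <chrono>
// #include <shared_mutex>
// #include <thread>
//
// bool zero_timeout_try_lock_behaviour() {
//   std::shared_timed_mutex mu;
//
//   // Uncontended: the single attempt succeeds despite the 0ms budget.
//   if (!mu.try_lock_for(std::chrono::milliseconds(0))) {
//     return false;
//   }
//   mu.unlock();
//
//   // Contended (another thread holds shared mode): the attempt runs once
//   // and fails fast instead of blocking.
//   mu.lock_shared();
//   bool writer_got_it = true;
//   std::thread t([&] {
//     writer_got_it = mu.try_lock_for(std::chrono::milliseconds(0));
//   });
//   t.join();
//   mu.unlock_shared();
//   return !writer_got_it;
// }
// ```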
//
// If you have observed by profiling that your SharedMutex-s are getting
// cache misses on deferredReaders[] due to another SharedMutex user, then
// you can use the tag type to create your own instantiation of the type.
// The contention threshold (see kNumSharedToStartDeferring) should make
// this unnecessary in all but the most extreme cases. Make sure to check
// that the increased icache and dcache footprint of the tagged result is
// worth it.
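// A tagged instantiation can be sketched as follows (the tag name is
// hypothetical; only the first two template parameters need to be given):
//
// ```cpp
// #include <folly/SharedMutex.h>
//
// // This family of locks gets its own deferredReaders[] stripe area,
// // separate from the default folly::SharedMutex instantiation.
// struct MyHotLockTag {};
// using MyHotSharedMutex =
//     folly::SharedMutexImpl<false /* ReaderPriority */, MyHotLockTag>;
// ```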
// SharedMutex's use of thread local storage is an optimization, so
// for the case where thread local storage is not supported, define it
// away.
// Note about TSAN (ThreadSanitizer): the SharedMutexWritePriority version
// (the default) of this mutex is annotated appropriately so that TSAN can
// perform lock inversion analysis. However, the SharedMutexReadPriority version
// is not annotated. This is because TSAN's lock order heuristic
// assumes that two calls to lock_shared must be ordered, which leads
// to too many false positives for the reader-priority case.
//
// Suppose thread A holds a SharedMutexWritePriority lock in shared mode and an
// independent thread B is waiting for exclusive access. Then a thread C's
// lock_shared can't proceed until A has released the lock. Discounting
// situations that never use exclusive mode (so no lock is necessary at all)
// this means that without higher-level reasoning it is not safe to ignore
// reader <-> reader interactions.
//
// This reasoning does not apply to SharedMutexReadPriority, because there are
// no actions by a thread B that can make C need to wait for A. Since the
// overwhelming majority of SharedMutex instances use write priority, we
// restrict the TSAN annotations to only SharedMutexWritePriority.
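// The core property of the lock — many concurrent shared holders, one
// exclusive holder — can be exercised with std::shared_timed_mutex, whose
// interface SharedMutex implements. A sketch (the function name is for
// illustration only):
//
// ```cpp
// #include <atomic>
// #include <cassert>
// #include <shared_mutex>
// #include <thread>
// #include <vector>
//
// bool readers_overlap_and_writer_excludes() {
//   std::shared_timed_mutex mu;
//   std::atomic<int> readers_inside{0};
//   std::atomic<bool> all_overlapped{false};
//   int protected_value = 0;
//
//   // Four readers acquire the lock in shared mode and each waits until
//   // all four hold it simultaneously, proving shared mode is concurrent.
//   std::vector<std::thread> readers;
//   for (int i = 0; i < 4; ++i) {
//     readers.emplace_back([&] {
//       std::shared_lock<std::shared_timed_mutex> guard(mu);
//       ++readers_inside;
//       while (readers_inside.load() < 4 && !all_overlapped.load()) {
//         std::this_thread::yield();
//       }
//       all_overlapped.store(true);
//     });
//   }
//   for (auto& t : readers) {
//     t.join();
//   }
//
//   {
//     // A writer then takes the lock in exclusive mode.
//     std::unique_lock<std::shared_timed_mutex> guard(mu);
//     ++protected_value;
//   }
//   return all_overlapped.load() && protected_value == 1;
// }
// ```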
#ifndef FOLLY_SHAREDMUTEX_TLS
#if !FOLLY_MOBILE
#define FOLLY_SHAREDMUTEX_TLS FOLLY_TLS
#else
#define FOLLY_SHAREDMUTEX_TLS
#endif
#endif
namespace folly {
struct SharedMutexToken {
enum class Type : uint16_t {
INVALID = 0,
INLINE_SHARED,
DEFERRED_SHARED,
};
Type type_;
uint16_t slot_;
};
namespace detail {
// Returns a guard that gives permission for the current thread to
// annotate, and adjust the annotation bits in, the SharedMutex at ptr.
std::unique_lock<std::mutex> sharedMutexAnnotationGuard(void* ptr);
} // namespace detail
template <
bool ReaderPriority,
typename Tag_ = void,
template <typename> class Atom = std::atomic,
bool BlockImmediately = false,
bool AnnotateForThreadSanitizer = kIsSanitizeThread && !ReaderPriority>
class SharedMutexImpl {
public:
static constexpr bool kReaderPriority = ReaderPriority;
typedef Tag_ Tag;
typedef SharedMutexToken Token;
class FOLLY_NODISCARD ReadHolder;
class FOLLY_NODISCARD UpgradeHolder;
class FOLLY_NODISCARD WriteHolder;
constexpr SharedMutexImpl() noexcept : state_(0) {}
SharedMutexImpl(const SharedMutexImpl&) = delete;
SharedMutexImpl(SharedMutexImpl&&) = delete;
SharedMutexImpl& operator=(const SharedMutexImpl&) = delete;
SharedMutexImpl& operator=(SharedMutexImpl&&) = delete;
// It is an error to destroy a SharedMutex that still has
// any outstanding locks. This is checked if NDEBUG isn't defined.
// SharedMutex's exclusive mode can be safely used to guard the lock's
// own destruction. If, for example, you acquire the lock in exclusive
// mode and then observe that the object containing the lock is no longer
// needed, you can unlock() and then immediately destroy the lock.
// See https://sourceware.org/bugzilla/show_bug.cgi?id=13690 for a
// description about why this property needs to be explicitly mentioned.
~SharedMutexImpl() {
auto state = state_.load(std::memory_order_relaxed);
if (UNLIKELY((state & kHasS) != 0)) {
cleanupTokenlessSharedDeferred(state);
}
if (folly::kIsDebug) {
// These asserts check that everybody has released the lock before it
// is destroyed. If you arrive here while debugging that is likely
// the problem. (You could also have general heap corruption.)
// if a futexWait fails to go to sleep because the value has been
// changed, we don't necessarily clean up the wait bits, so it is
// possible they will be set here in a correct system
assert((state & ~(kWaitingAny | kMayDefer | kAnnotationCreated)) == 0);
if ((state & kMayDefer) != 0) {
for (uint32_t slot = 0; slot < kMaxDeferredReaders; ++slot) {
auto slotValue =
deferredReader(slot)->load(std::memory_order_relaxed);
assert(!slotValueIsThis(slotValue));
(void)slotValue;
}
}
}
annotateDestroy();
}
// Checks if an exclusive lock could succeed so that lock elision could be
// enabled. Different from the two eligible_for_lock_{upgrade|shared}_elision
// functions, this is a conservative check since kMayDefer indicates
// possibly-existing deferred readers.
bool eligible_for_lock_elision() const {
// We rely on the transaction for linearization. Wait bits are
// irrelevant because a successful transaction will be in and out
// without affecting the wakeup. kBegunE is also okay for a similar
// reason.
auto state = state_.load(std::memory_order_relaxed);
return (state & (kHasS | kMayDefer | kHasE | kHasU)) == 0;
}
// Checks if an upgrade lock could succeed so that lock elision could be
// enabled.
bool eligible_for_lock_upgrade_elision() const {
auto state = state_.load(std::memory_order_relaxed);
return (state & (kHasE | kHasU)) == 0;
}
// Checks if a shared lock could succeed so that lock elision could be
// enabled.
bool eligible_for_lock_shared_elision() const {
// No need to honor kBegunE because a transaction doesn't block anybody
auto state = state_.load(std::memory_order_relaxed);
return (state & kHasE) == 0;
}
void lock() {
WaitForever ctx;
(void)lockExclusiveImpl(kHasSolo, ctx);
annotateAcquired(annotate_rwlock_level::wrlock);
}
bool try_lock() {
WaitNever ctx;
auto result = lockExclusiveImpl(kHasSolo, ctx);
annotateTryAcquired(result, annotate_rwlock_level::wrlock);
return result;
}
template <class Rep, class Period>
bool try_lock_for(const std::chrono::duration<Rep, Period>& duration) {
WaitForDuration<Rep, Period> ctx(duration);
auto result = lockExclusiveImpl(kHasSolo, ctx);
annotateTryAcquired(result, annotate_rwlock_level::wrlock);
return result;
}
template <class Clock, class Duration>
bool try_lock_until(
const std::chrono::time_point<Clock, Duration>& absDeadline) {
WaitUntilDeadline<Clock, Duration> ctx{absDeadline};
auto result = lockExclusiveImpl(kHasSolo, ctx);
annotateTryAcquired(result, annotate_rwlock_level::wrlock);
return result;
}
void unlock() {
annotateReleased(annotate_rwlock_level::wrlock);
// It is possible that we have a left-over kWaitingNotS if the last
// unlock_shared() that let our matching lock() complete finished
// releasing before lock()'s futexWait went to sleep. Clean it up now
auto state = (state_ &= ~(kWaitingNotS | kPrevDefer | kHasE));
assert((state & ~(kWaitingAny | kAnnotationCreated)) == 0);
wakeRegisteredWaiters(state, kWaitingE | kWaitingU | kWaitingS);
}
// Managing the token yourself makes unlock_shared a bit faster
void lock_shared() {
WaitForever ctx;
(void)lockSharedImpl(nullptr, ctx);
annotateAcquired(annotate_rwlock_level::rdlock);
}
void lock_shared(Token& token) {
WaitForever ctx;
(void)lockSharedImpl(&token, ctx);
annotateAcquired(annotate_rwlock_level::rdlock);
}
bool try_lock_shared() {
WaitNever ctx;
auto result = lockSharedImpl(nullptr, ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
bool try_lock_shared(Token& token) {
WaitNever ctx;
auto result = lockSharedImpl(&token, ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
template <class Rep, class Period>
bool try_lock_shared_for(const std::chrono::duration<Rep, Period>& duration) {
WaitForDuration<Rep, Period> ctx(duration);
auto result = lockSharedImpl(nullptr, ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
template <class Rep, class Period>
bool try_lock_shared_for(
const std::chrono::duration<Rep, Period>& duration,
Token& token) {
WaitForDuration<Rep, Period> ctx(duration);
auto result = lockSharedImpl(&token, ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
template <class Clock, class Duration>
bool try_lock_shared_until(
const std::chrono::time_point<Clock, Duration>& absDeadline) {
WaitUntilDeadline<Clock, Duration> ctx{absDeadline};
auto result = lockSharedImpl(nullptr, ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
template <class Clock, class Duration>
bool try_lock_shared_until(
const std::chrono::time_point<Clock, Duration>& absDeadline,
Token& token) {
WaitUntilDeadline<Clock, Duration> ctx{absDeadline};
auto result = lockSharedImpl(&token, ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
void unlock_shared() {
annotateReleased(annotate_rwlock_level::rdlock);
auto state = state_.load(std::memory_order_acquire);
// kPrevDefer can only be set if kHasE or kBegunE is set
assert((state & (kPrevDefer | kHasE | kBegunE)) != kPrevDefer);
// lock() strips kMayDefer immediately, but then copies it to
// kPrevDefer so we can tell if the pre-lock() lock_shared() might
// have deferred
if ((state & (kMayDefer | kPrevDefer)) == 0 ||
!tryUnlockTokenlessSharedDeferred()) {
// Matching lock_shared() couldn't have deferred, or the deferred
// lock has already been inlined by applyDeferredReaders()
unlockSharedInline();
}
}
void unlock_shared(Token& token) {
annotateReleased(annotate_rwlock_level::rdlock);
assert(
token.type_ == Token::Type::INLINE_SHARED ||
token.type_ == Token::Type::DEFERRED_SHARED);
if (token.type_ != Token::Type::DEFERRED_SHARED ||
!tryUnlockSharedDeferred(token.slot_)) {
unlockSharedInline();
}
if (folly::kIsDebug) {
token.type_ = Token::Type::INVALID;
}
}
void unlock_and_lock_shared() {
annotateReleased(annotate_rwlock_level::wrlock);
annotateAcquired(annotate_rwlock_level::rdlock);
// We can't use state_ -=, because we need to clear 2 bits (1 of which
// has an uncertain initial state) and set 1 other. We might as well
// clear the relevant wake bits at the same time. Note that since S
// doesn't block the beginning of a transition to E (writer priority
// can cut off new S, reader priority grabs BegunE and blocks deferred
// S) we need to wake E as well.
auto state = state_.load(std::memory_order_acquire);
do {
assert(
(state & ~(kWaitingAny | kPrevDefer | kAnnotationCreated)) == kHasE);
} while (!state_.compare_exchange_strong(
state, (state & ~(kWaitingAny | kPrevDefer | kHasE)) + kIncrHasS));
if ((state & (kWaitingE | kWaitingU | kWaitingS)) != 0) {
futexWakeAll(kWaitingE | kWaitingU | kWaitingS);
}
}
void unlock_and_lock_shared(Token& token) {
unlock_and_lock_shared();
token.type_ = Token::Type::INLINE_SHARED;
}
void lock_upgrade() {
WaitForever ctx;
(void)lockUpgradeImpl(ctx);
// For TSAN: treat upgrade locks as equivalent to read locks
annotateAcquired(annotate_rwlock_level::rdlock);
}
bool try_lock_upgrade() {
WaitNever ctx;
auto result = lockUpgradeImpl(ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
template <class Rep, class Period>
bool try_lock_upgrade_for(
const std::chrono::duration<Rep, Period>& duration) {
WaitForDuration<Rep, Period> ctx(duration);
auto result = lockUpgradeImpl(ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
template <class Clock, class Duration>
bool try_lock_upgrade_until(
const std::chrono::time_point<Clock, Duration>& absDeadline) {
WaitUntilDeadline<Clock, Duration> ctx{absDeadline};
auto result = lockUpgradeImpl(ctx);
annotateTryAcquired(result, annotate_rwlock_level::rdlock);
return result;
}
void unlock_upgrade() {
annotateReleased(annotate_rwlock_level::rdlock);
auto state = (state_ -= kHasU);
assert((state & (kWaitingNotS | kHasSolo)) == 0);
wakeRegisteredWaiters(state, kWaitingE | kWaitingU);
}
void unlock_upgrade_and_lock() {
// no waiting necessary, so waitMask is empty
WaitForever ctx;
(void)lockExclusiveImpl(0, ctx);
annotateReleased(annotate_rwlock_level::rdlock);
annotateAcquired(annotate_rwlock_level::wrlock);
}
void unlock_upgrade_and_lock_shared() {
// No need to annotate for TSAN here because we model upgrade and shared
// locks as the same.
auto state = (state_ -= kHasU - kIncrHasS);
assert((state & (kWaitingNotS | kHasSolo)) == 0);
wakeRegisteredWaiters(state, kWaitingE | kWaitingU);
}
void unlock_upgrade_and_lock_shared(Token& token) {
unlock_upgrade_and_lock_shared();
token.type_ = Token::Type::INLINE_SHARED;
}
void unlock_and_lock_upgrade() {
annotateReleased(annotate_rwlock_level::wrlock);
annotateAcquired(annotate_rwlock_level::rdlock);
// We can't use state_ -=, because we need to clear 2 bits (1 of
// which has an uncertain initial state) and set 1 other. We might
// as well clear the relevant wake bits at the same time.
auto state = state_.load(std::memory_order_acquire);
while (true) {
assert(
(state & ~(kWaitingAny | kPrevDefer | kAnnotationCreated)) == kHasE);
auto after =
(state & ~(kWaitingNotS | kWaitingS | kPrevDefer | kHasE)) + kHasU;
if (state_.compare_exchange_strong(state, after)) {
if ((state & kWaitingS) != 0) {
futexWakeAll(kWaitingS);
}
return;
}
}
}
private:
typedef typename folly::detail::Futex<Atom> Futex;
// Internally we use four kinds of wait contexts. These are structs
// that provide a doWait method that returns true if a futex wake
// was issued that intersects with the waitMask, false if there was a
// timeout and no more waiting should be performed. Spinning occurs
// before the wait context is invoked.
struct WaitForever {
bool canBlock() {
return true;
}
bool canTimeOut() {
return false;
}
bool shouldTimeOut() {
return false;
}
bool doWait(Futex& futex, uint32_t expected, uint32_t waitMask) {
detail::futexWait(&futex, expected, waitMask);
return true;
}
};
struct WaitNever {
bool canBlock() {
return false;
}
bool canTimeOut() {
return true;
}
bool shouldTimeOut() {
return true;
}
bool doWait(
Futex& /* futex */,
uint32_t /* expected */,
uint32_t /* waitMask */) {
return false;
}
};
template <class Rep, class Period>
struct WaitForDuration {
std::chrono::duration<Rep, Period> duration_;
bool deadlineComputed_;
std::chrono::steady_clock::time_point deadline_;
explicit WaitForDuration(const std::chrono::duration<Rep, Period>& duration)
: duration_(duration), deadlineComputed_(false) {}
std::chrono::steady_clock::time_point deadline() {
if (!deadlineComputed_) {
deadline_ = std::chrono::steady_clock::now() + duration_;
deadlineComputed_ = true;
}
return deadline_;
}
bool canBlock() {
return duration_.count() > 0;
}
bool canTimeOut() {
return true;
}
bool shouldTimeOut() {
return std::chrono::steady_clock::now() > deadline();
}
bool doWait(Futex& futex, uint32_t expected, uint32_t waitMask) {
auto result =
detail::futexWaitUntil(&futex, expected, deadline(), waitMask);
return result != folly::detail::FutexResult::TIMEDOUT;
}
};
template <class Clock, class Duration>
struct WaitUntilDeadline {
std::chrono::time_point<Clock, Duration> absDeadline_;
bool canBlock() {
return true;
}
bool canTimeOut() {
return true;
}
bool shouldTimeOut() {
return Clock::now() > absDeadline_;
}
bool doWait(Futex& futex, uint32_t expected, uint32_t waitMask) {
auto result =
detail::futexWaitUntil(&futex, expected, absDeadline_, waitMask);
return result != folly::detail::FutexResult::TIMEDOUT;
}
};
void annotateLazyCreate() {
if (AnnotateForThreadSanitizer &&
(state_.load() & kAnnotationCreated) == 0) {
auto guard = detail::sharedMutexAnnotationGuard(this);
// check again
if ((state_.load() & kAnnotationCreated) == 0) {
state_.fetch_or(kAnnotationCreated);
annotate_benign_race_sized(
&state_, sizeof(state_), "init TSAN", __FILE__, __LINE__);
annotate_rwlock_create(this, __FILE__, __LINE__);
}
}
}
void annotateDestroy() {
if (AnnotateForThreadSanitizer) {
annotateLazyCreate();
annotate_rwlock_destroy(this, __FILE__, __LINE__);
}
}
void annotateAcquired(annotate_rwlock_level w) {
if (AnnotateForThreadSanitizer) {
annotateLazyCreate();
annotate_rwlock_acquired(this, w, __FILE__, __LINE__);
}
}
void annotateTryAcquired(bool result, annotate_rwlock_level w) {
if (AnnotateForThreadSanitizer) {
annotateLazyCreate();
annotate_rwlock_try_acquired(this, w, result, __FILE__, __LINE__);
}
}
void annotateReleased(annotate_rwlock_level w) {
if (AnnotateForThreadSanitizer) {
assert((state_.load() & kAnnotationCreated) != 0);
annotate_rwlock_released(this, w, __FILE__, __LINE__);
}
}
// 32 bits of state
Futex state_{};
// S count needs to be on the end, because we explicitly allow it to
// underflow. This can occur while we are in the middle of applying
// deferred locks (we remove them from deferredReaders[] before
// inlining them), or during token-less unlock_shared() if a racing
// lock_shared();unlock_shared() moves the deferredReaders slot while
// the first unlock_shared() is scanning. The former case is cleaned
// up before we finish applying the locks. The latter case can persist
// until destruction, when it is cleaned up.
static constexpr uint32_t kIncrHasS = 1 << 11;
static constexpr uint32_t kHasS = ~(kIncrHasS - 1);
// Set if annotation has been completed for this instance. That annotation
// (and setting this bit afterward) must be guarded by one of the mutexes in
// annotationCreationGuards.
static constexpr uint32_t kAnnotationCreated = 1 << 10;
// If false, then there are definitely no deferred read locks for this
// instance. Cleared after initialization and when exclusively locked.
static constexpr uint32_t kMayDefer = 1 << 9;
// lock() clears kMayDefer as soon as it starts draining readers (so
// that it doesn't have to do a second CAS once drain completes), but
// unlock_shared() still needs to know whether to scan deferredReaders[]
// or not. We copy kMayDefer to kPrevDefer when setting kHasE or
// kBegunE, and clear it when clearing those bits.
static constexpr uint32_t kPrevDefer = 1 << 8;
// Exclusive-locked blocks all read locks and write locks. This bit
// may be set before all readers have finished, but in that case the
// thread that sets it won't return to the caller until all read locks
// have been released.
static constexpr uint32_t kHasE = 1 << 7;
// Exclusive-draining means that lock() is waiting for existing readers
// to leave, but that new readers may still acquire shared access.
// This is only used in reader priority mode. New readers during
// drain must be inline. The difference between this and kHasU is that
// kBegunE prevents kMayDefer from being set.
static constexpr uint32_t kBegunE = 1 << 6;
// At most one thread may have either exclusive or upgrade lock
// ownership. Unlike exclusive mode, ownership of the lock in upgrade
// mode doesn't preclude other threads holding the lock in shared mode.
// boost's concept for this doesn't explicitly say whether new shared
// locks can be acquired once lock_upgrade has succeeded, but doesn't
// list that as disallowed. RWSpinLock disallows new read locks after
// lock_upgrade has been acquired, but the boost implementation doesn't.
// We choose the latter.
static constexpr uint32_t kHasU = 1 << 5;
// There are three states that we consider to be "solo", in that they
// cannot coexist with other solo states. These are kHasE, kBegunE,
// and kHasU. Note that S doesn't conflict with any of these, because
// setting kHasE is only one of the two steps needed to actually
// acquire the lock in exclusive mode (the other is draining the existing
// S holders).
static constexpr uint32_t kHasSolo = kHasE | kBegunE | kHasU;
// Once a thread sets kHasE it needs to wait for the current readers
// to exit the lock. We give this a separate wait identity from the
// waiting to set kHasE so that we can perform partial wakeups (wake
// one instead of wake all).
static constexpr uint32_t kWaitingNotS = 1 << 4;
// When waking writers we can either wake them all, in which case we
// can clear kWaitingE, or we can call futexWake(1). futexWake tells
// us if anybody woke up, but even if we detect that nobody woke up we
// can't clear the bit after the fact without issuing another wakeup.
// To avoid thundering herds when there are lots of pending lock()
// without needing to call futexWake twice when there is only one
// waiter, kWaitingE actually encodes if we have observed multiple
// concurrent waiters. Tricky: ABA issues on futexWait mean that when
// we see kWaitingESingle we can't assume that there is only one.
static constexpr uint32_t kWaitingESingle = 1 << 2;
static constexpr uint32_t kWaitingEMultiple = 1 << 3;
static constexpr uint32_t kWaitingE = kWaitingESingle | kWaitingEMultiple;
// kWaitingU is essentially a 1 bit saturating counter. It always
// requires a wakeAll.
static constexpr uint32_t kWaitingU = 1 << 1;
// All blocked lock_shared() should be awoken, so it is correct (not
// suboptimal) to wakeAll if there are any shared readers.
static constexpr uint32_t kWaitingS = 1 << 0;
// kWaitingAny is a mask of all of the bits that record the state of
// threads, rather than the state of the lock. It is convenient to be
// able to mask them off during asserts.
static constexpr uint32_t kWaitingAny =
kWaitingNotS | kWaitingE | kWaitingU | kWaitingS;
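// For reference, a summary (derived from the constants above and below,
// not additional state) of the state_ bit layout, low bit first:
//
//   bit 0       kWaitingS           blocked lock_shared() waiters
//   bit 1       kWaitingU           blocked lock_upgrade() waiters
//   bit 2       kWaitingESingle     at least one blocked lock() waiter
//   bit 3       kWaitingEMultiple   multiple blocked lock() waiters observed
//   bit 4       kWaitingNotS        writer waiting for the S count to drain
//   bit 5       kHasU               upgrade lock held
//   bit 6       kBegunE             exclusive acquisition begun (reader priority)
//   bit 7       kHasE               exclusive lock held
//   bit 8       kPrevDefer          snapshot of kMayDefer taken at kHasE/kBegunE
//   bit 9       kMayDefer           deferredReaders[] may hold locks on this
//   bit 10      kAnnotationCreated  TSAN annotation lazily initialized
//   bits 11..31 inline shared (S) count, in units of kIncrHasS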
// The reader count at which a reader will attempt to use the lock
// in deferred mode. If this value is 2, then the second concurrent
// reader will set kMayDefer and use deferredReaders[]. kMayDefer is
// cleared during exclusive access, so this threshold must be reached
// again after each time the lock is held in exclusive mode.
static constexpr uint32_t kNumSharedToStartDeferring = 2;
// The typical number of spins that a thread will wait for a state
// transition. There is no bound on the number of threads that can wait
// for a writer, so we are pretty conservative here to limit the chance
// that we are starving the writer of CPU. Each spin is 6 or 7 nanos,
// almost all of which is in the pause instruction.
static constexpr uint32_t kMaxSpinCount = !BlockImmediately ? 1000 : 2;
// The maximum number of soft yields before falling back to futex.
// If the preemption heuristic is activated we will fall back before
// this. A soft yield takes ~900 nanos (two sched_yield plus a call
// to getrusage, with checks of the goal at each step). Soft yields
// aren't compatible with deterministic execution under test (unlike
// futexWaitUntil, which has a capricious but deterministic back end).
static constexpr uint32_t kMaxSoftYieldCount = !BlockImmediately ? 1000 : 0;
// If AccessSpreader assigns indexes from 0..k*n-1 on a system where some
// level of the memory hierarchy is symmetrically divided into k pieces
// (NUMA nodes, last-level caches, L1 caches, ...), then slot indexes
// that are the same after integer division by k share that resource.
// Our strategy for deferred readers is to probe up to numSlots/4 slots,
// using the full granularity of AccessSpreader for the start slot
// and then search outward. We can use AccessSpreader::current(n)
// without managing our own spreader if kMaxDeferredReaders <=
// AccessSpreader::kMaxCpus, which is currently 128.
//
// Our 2-socket E5-2660 machines have 8 L1 caches on each chip,
// with 64 byte cache lines. That means we need 64*16 bytes of
// deferredReaders[] to give each L1 its own playground. On x86_64
// each DeferredReaderSlot is 8 bytes, so we need kMaxDeferredReaders
// * kDeferredSeparationFactor >= 64 * 16 / 8 == 128. If
// kDeferredSearchDistance * kDeferredSeparationFactor <=
// 64 / 8 then we will search only within a single cache line, which
// guarantees we won't have inter-L1 contention. We give ourselves
// a factor of 2 on the core count, which should hold us for a couple
// processor generations. deferredReaders[] is 2048 bytes currently.
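// Checking that arithmetic against the constants just below (a sketch,
// assuming 8-byte pointers on x86_64): kMaxDeferredReaders (64) *
// kDeferredSeparationFactor (4) slots * 8 bytes == 2048 bytes total,
// and kDeferredSearchDistance (2) * kDeferredSeparationFactor (4) ==
// 8 slots == 64 bytes probed, i.e. a single cache line.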
public:
static constexpr uint32_t kMaxDeferredReaders = 64;
static constexpr uint32_t kDeferredSearchDistance = 2;
static constexpr uint32_t kDeferredSeparationFactor = 4;
private:
static_assert(
!(kMaxDeferredReaders & (kMaxDeferredReaders - 1)),
"kMaxDeferredReaders must be a power of 2");
static_assert(
!(kDeferredSearchDistance & (kDeferredSearchDistance - 1)),
"kDeferredSearchDistance must be a power of 2");
// The number of deferred locks that can be simultaneously acquired
// by a thread via the token-less methods without performing any heap
// allocations. Each of these costs 3 pointers (24 bytes, probably)
// per thread. There's not much point in making this larger than
// kDeferredSearchDistance.
static constexpr uint32_t kTokenStackTLSCapacity = 2;
// We need to make sure that if there is a lock_shared()
// and lock_shared(token) followed by unlock_shared() and
// unlock_shared(token), the token-less unlock doesn't null
// out deferredReaders[token.slot_]. If we allowed that, then
// unlock_shared(token) wouldn't be able to assume that its lock
// had been inlined by applyDeferredReaders when it finds that
// deferredReaders[token.slot_] no longer points to this. We accomplish
// this by stealing bit 0 from the pointer to record that the slot's
// element has no token, hence our use of uintptr_t in deferredReaders[].
static constexpr uintptr_t kTokenless = 0x1;
// This is the starting location for Token-less unlock_shared().
static FOLLY_SHAREDMUTEX_TLS uint32_t tls_lastTokenlessSlot;
// Last deferred reader slot used.
static FOLLY_SHAREDMUTEX_TLS uint32_t tls_lastDeferredReaderSlot;
// Only indexes divisible by kDeferredSeparationFactor are used.
// If any of those elements points to a SharedMutexImpl, then it
// should be considered that there is a shared lock on that instance.
// See kTokenless.
public:
typedef Atom<uintptr_t> DeferredReaderSlot;
private:
alignas(hardware_destructive_interference_size) static DeferredReaderSlot
deferredReaders[kMaxDeferredReaders * kDeferredSeparationFactor];
// Performs an exclusive lock, waiting for state_ & preconditionGoalMask
// to be zero first
template <class WaitContext>
bool lockExclusiveImpl(uint32_t preconditionGoalMask, WaitContext& ctx) {
uint32_t state = state_.load(std::memory_order_acquire);
if (LIKELY(
(state & (preconditionGoalMask | kMayDefer | kHasS)) == 0 &&
state_.compare_exchange_strong(state, (state | kHasE) & ~kHasU))) {
return true;
} else {
return lockExclusiveImpl(state, preconditionGoalMask, ctx);
}
}
template <class WaitContext>
bool lockExclusiveImpl(
uint32_t& state,
uint32_t preconditionGoalMask,
WaitContext& ctx) {
while (true) {
if (UNLIKELY((state & preconditionGoalMask) != 0) &&
!waitForZeroBits(state, preconditionGoalMask, kWaitingE, ctx) &&
ctx.canTimeOut()) {
return false;
}
uint32_t after = (state & kMayDefer) == 0 ? 0 : kPrevDefer;
if (!kReaderPriority || (state & (kMayDefer | kHasS)) == 0) {
// Block readers immediately, either because we are in write
// priority mode or because we can acquire the lock in one
// step. Note that if state has kHasU, then we are doing an
// unlock_upgrade_and_lock() and we should clear it (reader
// priority branch also does this).
after |= (state | kHasE) & ~(kHasU | kMayDefer);
} else {
after |= (state | kBegunE) & ~(kHasU | kMayDefer);
}
if (state_.compare_exchange_strong(state, after)) {
auto before = state;
state = after;
// If we set kHasE (writer priority) then no new readers can
// arrive. If we set kBegunE then they can still enter, but
// they must be inline. Either way we need to either spin on
// deferredReaders[] slots, or inline them so that we can wait on
// kHasS to zero itself. deferredReaders[] is pointers, which on
// x86_64 are bigger than futex() can handle, so we inline the
// deferred locks instead of trying to futexWait on each slot.
// Readers are responsible for rechecking state_ after recording
// a deferred read to avoid atomicity problems between the state_
// CAS and applyDeferredReader's reads of deferredReaders[].
if (UNLIKELY((before & kMayDefer) != 0)) {
applyDeferredReaders(state, ctx);
}
while (true) {
assert((state & (kHasE | kBegunE)) != 0 && (state & kHasU) == 0);
if (UNLIKELY((state & kHasS) != 0) &&
!waitForZeroBits(state, kHasS, kWaitingNotS, ctx) &&
ctx.canTimeOut()) {
// Ugh. We blocked new readers and other writers for a while,
// but were unable to complete. Move on. On the plus side
// we can clear kWaitingNotS because nobody else can piggyback
// on it.
state = (state_ &= ~(kPrevDefer | kHasE | kBegunE | kWaitingNotS));
wakeRegisteredWaiters(state, kWaitingE | kWaitingU | kWaitingS);
return false;
}
if (kReaderPriority && (state & kHasE) == 0) {
assert((state & kBegunE) != 0);
if (!state_.compare_exchange_strong(
state, (state & ~kBegunE) | kHasE)) {
continue;
}
}
return true;
}
}
}
}
template <class WaitContext>
bool waitForZeroBits(
uint32_t& state,
uint32_t goal,
uint32_t waitMask,
WaitContext& ctx) {
uint32_t spinCount = 0;
while (true) {
state = state_.load(std::memory_order_acquire);
if ((state & goal) == 0) {
return true;
}
asm_volatile_pause();
++spinCount;
if (UNLIKELY(spinCount >= kMaxSpinCount)) {
return ctx.canBlock() &&
yieldWaitForZeroBits(state, goal, waitMask, ctx);
}
}
}
template <class WaitContext>
bool yieldWaitForZeroBits(
uint32_t& state,
uint32_t goal,
uint32_t waitMask,
WaitContext& ctx) {
#ifdef RUSAGE_THREAD
struct rusage usage;
std::memset(&usage, 0, sizeof(usage));
long before = -1;
#endif
for (uint32_t yieldCount = 0; yieldCount < kMaxSoftYieldCount;
++yieldCount) {
for (int softState = 0; softState < 3; ++softState) {
if (softState < 2) {
std::this_thread::yield();
} else {
#ifdef RUSAGE_THREAD
getrusage(RUSAGE_THREAD, &usage);
#endif
}
if (((state = state_.load(std::memory_order_acquire)) & goal) == 0) {
return true;
}
if (ctx.shouldTimeOut()) {
return false;
}
}
#ifdef RUSAGE_THREAD
if (before >= 0 && usage.ru_nivcsw >= before + 2) {
// One involuntary csw might just be occasional background work,
// but if we get two in a row then we guess that there is someone
// else who can profitably use this CPU. Fall back to futex
break;
}
before = usage.ru_nivcsw;
#endif
}
return futexWaitForZeroBits(state, goal, waitMask, ctx);
}
template <class WaitContext>
bool futexWaitForZeroBits(
uint32_t& state,
uint32_t goal,
uint32_t waitMask,
WaitContext& ctx) {
assert(
waitMask == kWaitingNotS || waitMask == kWaitingE ||
waitMask == kWaitingU || waitMask == kWaitingS);
while (true) {
state = state_.load(std::memory_order_acquire);
if ((state & goal) == 0) {
return true;
}
auto after = state;
if (waitMask == kWaitingE) {
if ((state & kWaitingESingle) != 0) {
after |= kWaitingEMultiple;
} else {
after |= kWaitingESingle;
}
} else {
after |= waitMask;
}
// CAS is better than atomic |= here, because it lets us avoid
// setting the wait flag when the goal is concurrently achieved
if (after != state && !state_.compare_exchange_strong(state, after)) {
continue;
}
if (!ctx.doWait(state_, after, waitMask)) {
// timed out
return false;
}
}
}
// Wakes up waiters registered in state_ as appropriate, clearing the
// awaiting bits for anybody that was awoken. Tries to perform direct
// single wakeup of an exclusive waiter if appropriate
void wakeRegisteredWaiters(uint32_t& state, uint32_t wakeMask) {
if (UNLIKELY((state & wakeMask) != 0)) {
wakeRegisteredWaitersImpl(state, wakeMask);
}
}
void wakeRegisteredWaitersImpl(uint32_t& state, uint32_t wakeMask) {
// If there are multiple lock() calls pending, only one of them will actually
// get to wake up, so issuing futexWakeAll will make a thundering herd.
// There's nothing stopping us from issuing futexWake(1) instead,
// so long as the wait bits are still an accurate reflection of
// the waiters. If we notice (via futexWake's return value) that
// nobody woke up then we can try again with the normal wake-all path.
// Note that we can't just clear the bits at that point; we need to
// clear the bits and then issue another wakeup.
//
// It is possible that we wake an E waiter but an outside S grabs the
// lock instead, at which point we should wake pending U and S waiters.
// Rather than tracking state to make the failing E regenerate the
// wakeup, we just disable the optimization in the case that there
// are waiting U or S that we are eligible to wake.
if ((wakeMask & kWaitingE) == kWaitingE &&
(state & wakeMask) == kWaitingE &&
detail::futexWake(&state_, 1, kWaitingE) > 0) {
// somebody woke up, so leave state_ as is and clear it later
return;
}
if ((state & wakeMask) != 0) {
auto prev = state_.fetch_and(~wakeMask);
if ((prev & wakeMask) != 0) {
futexWakeAll(wakeMask);
}
state = prev & ~wakeMask;
}
}
void futexWakeAll(uint32_t wakeMask) {
detail::futexWake(&state_, std::numeric_limits<int>::max(), wakeMask);
}
DeferredReaderSlot* deferredReader(uint32_t slot) {
return &deferredReaders[slot * kDeferredSeparationFactor];
}
uintptr_t tokenfulSlotValue() {
return reinterpret_cast<uintptr_t>(this);
}
uintptr_t tokenlessSlotValue() {
return tokenfulSlotValue() | kTokenless;
}
bool slotValueIsThis(uintptr_t slotValue) {
return (slotValue & ~kTokenless) == tokenfulSlotValue();
}
// Clears any deferredReaders[] that point to this, adjusting the inline
// shared lock count to compensate. Does some spinning and yielding
// to avoid the work. Always finishes the application, even if ctx
// times out.
template <class WaitContext>
void applyDeferredReaders(uint32_t& state, WaitContext& ctx) {
uint32_t slot = 0;
uint32_t spinCount = 0;
while (true) {
while (!slotValueIsThis(
deferredReader(slot)->load(std::memory_order_acquire))) {
if (++slot == kMaxDeferredReaders) {
return;
}
}
asm_volatile_pause();
if (UNLIKELY(++spinCount >= kMaxSpinCount)) {
applyDeferredReaders(state, ctx, slot);
return;
}
}
}
template <class WaitContext>
void applyDeferredReaders(uint32_t& state, WaitContext& ctx, uint32_t slot) {
#ifdef RUSAGE_THREAD
struct rusage usage;
std::memset(&usage, 0, sizeof(usage));
long before = -1;
#endif
for (uint32_t yieldCount = 0; yieldCount < kMaxSoftYieldCount;
++yieldCount) {
for (int softState = 0; softState < 3; ++softState) {
if (softState < 2) {
std::this_thread::yield();
} else {
#ifdef RUSAGE_THREAD
getrusage(RUSAGE_THREAD, &usage);
#endif
}
while (!slotValueIsThis(
deferredReader(slot)->load(std::memory_order_acquire))) {
if (++slot == kMaxDeferredReaders) {
return;
}
}
if (ctx.shouldTimeOut()) {
// finish applying immediately on timeout
break;
}
}
#ifdef RUSAGE_THREAD
if (before >= 0 && usage.ru_nivcsw >= before + 2) {
// heuristic says run queue is not empty
break;
}
before = usage.ru_nivcsw;
#endif
}
uint32_t movedSlotCount = 0;
for (; slot < kMaxDeferredReaders; ++slot) {
auto slotPtr = deferredReader(slot);
auto slotValue = slotPtr->load(std::memory_order_acquire);
if (slotValueIsThis(slotValue) &&
slotPtr->compare_exchange_strong(slotValue, 0)) {
++movedSlotCount;
}
}
if (movedSlotCount > 0) {
state = (state_ += movedSlotCount * kIncrHasS);
}
assert((state & (kHasE | kBegunE)) != 0);
// if state + kIncrHasS overflows (off the end of state) then either
// we have 2^(32-11) readers (almost certainly an application bug)
// or we had an underflow (also a bug)
assert(state < state + kIncrHasS);
}
// It is straightforward to make a token-less lock_shared() and
// unlock_shared() either by making the token-less version always use
// INLINE_SHARED mode or by removing the token version. Supporting
// deferred operation for both types is trickier than it appears, because
// the purpose of the token is so that unlock_shared doesn't have to
// look in other slots for its deferred lock. Token-less unlock_shared
// might place a deferred lock in one place and then release a different
// slot that was originally used by the token-ful version. If this was
// important we could solve the problem by differentiating the deferred
// locks so that cross-variety release wouldn't occur. The best way
// is probably to steal a bit from the pointer, making deferredLocks[]
// an array of Atom<uintptr_t>.
template <class WaitContext>
bool lockSharedImpl(Token* token, WaitContext& ctx) {
uint32_t state = state_.load(std::memory_order_relaxed);
if ((state & (kHasS | kMayDefer | kHasE)) == 0 &&
state_.compare_exchange_strong(state, state + kIncrHasS)) {
if (token != nullptr) {
token->type_ = Token::Type::INLINE_SHARED;
}
return true;
}
return lockSharedImpl(state, token, ctx);
}
template <class WaitContext>
bool lockSharedImpl(uint32_t& state, Token* token, WaitContext& ctx);
// Updates the state in/out argument as if the locks were made inline,
// but does not update state_
void cleanupTokenlessSharedDeferred(uint32_t& state) {
for (uint32_t i = 0; i < kMaxDeferredReaders; ++i) {
auto slotPtr = deferredReader(i);
auto slotValue = slotPtr->load(std::memory_order_relaxed);
if (slotValue == tokenlessSlotValue()) {
slotPtr->store(0, std::memory_order_relaxed);
state += kIncrHasS;
if ((state & kHasS) == 0) {
break;
}
}
}
}
bool tryUnlockTokenlessSharedDeferred();
bool tryUnlockSharedDeferred(uint32_t slot) {
assert(slot < kMaxDeferredReaders);
auto slotValue = tokenfulSlotValue();
return deferredReader(slot)->compare_exchange_strong(slotValue, 0);
}
uint32_t unlockSharedInline() {
uint32_t state = (state_ -= kIncrHasS);
assert(
(state & (kHasE | kBegunE | kMayDefer)) != 0 ||
state < state + kIncrHasS);
if ((state & kHasS) == 0) {
// Only the second half of lock() can be blocked by a non-zero
// reader count, so that's the only thing we need to wake
wakeRegisteredWaiters(state, kWaitingNotS);
}
return state;
}
template <class WaitContext>
bool lockUpgradeImpl(WaitContext& ctx) {
uint32_t state;
do {
if (!waitForZeroBits(state, kHasSolo, kWaitingU, ctx)) {
return false;
}
} while (!state_.compare_exchange_strong(state, state | kHasU));
return true;
}
public:
class FOLLY_NODISCARD ReadHolder {
ReadHolder() : lock_(nullptr) {}
public:
explicit ReadHolder(const SharedMutexImpl* lock)
: lock_(const_cast<SharedMutexImpl*>(lock)) {
if (lock_) {
lock_->lock_shared(token_);
}
}
explicit ReadHolder(const SharedMutexImpl& lock)
: lock_(const_cast<SharedMutexImpl*>(&lock)) {
lock_->lock_shared(token_);
}
ReadHolder(ReadHolder&& rhs) noexcept
: lock_(rhs.lock_), token_(rhs.token_) {
rhs.lock_ = nullptr;
}
// Downgrade from upgrade mode
explicit ReadHolder(UpgradeHolder&& upgraded) : lock_(upgraded.lock_) {
assert(upgraded.lock_ != nullptr);
upgraded.lock_ = nullptr;
lock_->unlock_upgrade_and_lock_shared(token_);
}
// Downgrade from exclusive mode
explicit ReadHolder(WriteHolder&& writer) : lock_(writer.lock_) {
assert(writer.lock_ != nullptr);
writer.lock_ = nullptr;
lock_->unlock_and_lock_shared(token_);
}
ReadHolder& operator=(ReadHolder&& rhs) noexcept {
std::swap(lock_, rhs.lock_);
std::swap(token_, rhs.token_);
return *this;
}
ReadHolder(const ReadHolder& rhs) = delete;
ReadHolder& operator=(const ReadHolder& rhs) = delete;
~ReadHolder() {
unlock();
}
void unlock() {
if (lock_) {
lock_->unlock_shared(token_);
lock_ = nullptr;
}
}
private:
friend class UpgradeHolder;
friend class WriteHolder;
SharedMutexImpl* lock_;
SharedMutexToken token_;
};
class FOLLY_NODISCARD UpgradeHolder {
UpgradeHolder() : lock_(nullptr) {}
public:
explicit UpgradeHolder(SharedMutexImpl* lock) : lock_(lock) {
if (lock_) {
lock_->lock_upgrade();
}
}
explicit UpgradeHolder(SharedMutexImpl& lock) : lock_(&lock) {
lock_->lock_upgrade();
}
// Downgrade from exclusive mode
explicit UpgradeHolder(WriteHolder&& writer) : lock_(writer.lock_) {
assert(writer.lock_ != nullptr);
writer.lock_ = nullptr;
lock_->unlock_and_lock_upgrade();
}
UpgradeHolder(UpgradeHolder&& rhs) noexcept : lock_(rhs.lock_) {
rhs.lock_ = nullptr;
}
UpgradeHolder& operator=(UpgradeHolder&& rhs) noexcept {
std::swap(lock_, rhs.lock_);
return *this;
}
UpgradeHolder(const UpgradeHolder& rhs) = delete;
UpgradeHolder& operator=(const UpgradeHolder& rhs) = delete;
~UpgradeHolder() {
unlock();
}
void unlock() {
if (lock_) {
lock_->unlock_upgrade();
lock_ = nullptr;
}
}
private:
friend class WriteHolder;
friend class ReadHolder;
SharedMutexImpl* lock_;
};
class FOLLY_NODISCARD WriteHolder {
WriteHolder() : lock_(nullptr) {}
public:
explicit WriteHolder(SharedMutexImpl* lock) : lock_(lock) {
if (lock_) {
lock_->lock();
}
}
explicit WriteHolder(SharedMutexImpl& lock) : lock_(&lock) {
lock_->lock();
}
// Promotion from upgrade mode
explicit WriteHolder(UpgradeHolder&& upgrade) : lock_(upgrade.lock_) {
assert(upgrade.lock_ != nullptr);
upgrade.lock_ = nullptr;
lock_->unlock_upgrade_and_lock();
}
// README:
//
// It is intended that WriteHolder(ReadHolder&& rhs) do not exist.
//
// Shared locks (read) can not safely upgrade to unique locks (write).
// That upgrade path is a well-known recipe for deadlock, so we explicitly
// disallow it.
//
// If you need to do a conditional mutation, you have a few options:
// 1. Check the condition under a shared lock and release it.
// Then maybe check the condition again under a unique lock and maybe do
// the mutation.
// 2. Check the condition once under an upgradeable lock.
// Then maybe upgrade the lock to a unique lock and do the mutation.
// 3. Check the condition and maybe perform the mutation under a unique
// lock.
//
// Relevant upgradeable lock notes:
// * At most one upgradeable lock can be held at a time for a given shared
// mutex, just like a unique lock.
// * An upgradeable lock may be held concurrently with any number of shared
// locks.
// * An upgradeable lock may be upgraded atomically to a unique lock.
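// A minimal sketch of option 2 using the holders in this header, where
// shouldMutate() and doMutate() are hypothetical placeholders:
//
//   folly::SharedMutex mu;
//   {
//     folly::SharedMutex::UpgradeHolder uh(&mu); // readers may continue
//     if (shouldMutate()) {
//       // atomic promotion; at most one upgrade/unique holder exists
//       folly::SharedMutex::WriteHolder wh(std::move(uh));
//       doMutate();
//     }
//   } // whichever holder is still engaged unlocks here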
WriteHolder(WriteHolder&& rhs) noexcept : lock_(rhs.lock_) {
rhs.lock_ = nullptr;
}
WriteHolder& operator=(WriteHolder&& rhs) noexcept {
std::swap(lock_, rhs.lock_);
return *this;
}
WriteHolder(const WriteHolder& rhs) = delete;
WriteHolder& operator=(const WriteHolder& rhs) = delete;
~WriteHolder() {
unlock();
}
void unlock() {
if (lock_) {
lock_->unlock();
lock_ = nullptr;
}
}
private:
friend class ReadHolder;
friend class UpgradeHolder;
SharedMutexImpl* lock_;
};
// Adapters for Synchronized<>
friend void acquireRead(SharedMutexImpl& lock) {
lock.lock_shared();
}
friend void acquireReadWrite(SharedMutexImpl& lock) {
lock.lock();
}
friend void releaseRead(SharedMutexImpl& lock) {
lock.unlock_shared();
}
friend void releaseReadWrite(SharedMutexImpl& lock) {
lock.unlock();
}
friend bool acquireRead(SharedMutexImpl& lock, unsigned int ms) {
return lock.try_lock_shared_for(std::chrono::milliseconds(ms));
}
friend bool acquireReadWrite(SharedMutexImpl& lock, unsigned int ms) {
return lock.try_lock_for(std::chrono::milliseconds(ms));
}
};
typedef SharedMutexImpl<true> SharedMutexReadPriority;
typedef SharedMutexImpl<false> SharedMutexWritePriority;
typedef SharedMutexWritePriority SharedMutex;
typedef SharedMutexImpl<false, void, std::atomic, false, false>
SharedMutexSuppressTSAN;
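// A minimal usage sketch of the default alias, using only members
// referenced in this header:
//
//   folly::SharedMutex mu;
//   mu.lock_shared(); // shared (reader) access
//   mu.unlock_shared();
//   mu.lock(); // exclusive (writer) access
//   mu.unlock();
//   if (mu.try_lock_for(std::chrono::milliseconds(10))) {
//     mu.unlock();
//   }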
// Prevent the compiler from instantiating these in other translation units.
// They are instantiated once in SharedMutex.cpp
extern template class SharedMutexImpl<true>;
extern template class SharedMutexImpl<false>;
template <
bool ReaderPriority,
typename Tag_,
template <typename> class Atom,
bool BlockImmediately,
bool AnnotateForThreadSanitizer>
alignas(hardware_destructive_interference_size) typename SharedMutexImpl<
ReaderPriority,
Tag_,
Atom,
BlockImmediately,
AnnotateForThreadSanitizer>::DeferredReaderSlot
SharedMutexImpl<
ReaderPriority,
Tag_,
Atom,
BlockImmediately,
AnnotateForThreadSanitizer>::deferredReaders
[kMaxDeferredReaders * kDeferredSeparationFactor] = {};
template <
bool ReaderPriority,
typename Tag_,
template <typename> class Atom,
bool BlockImmediately,
bool AnnotateForThreadSanitizer>
FOLLY_SHAREDMUTEX_TLS uint32_t SharedMutexImpl<
ReaderPriority,
Tag_,
Atom,
BlockImmediately,
AnnotateForThreadSanitizer>::tls_lastTokenlessSlot = 0;
template <
bool ReaderPriority,
typename Tag_,
template <typename> class Atom,
bool BlockImmediately,
bool AnnotateForThreadSanitizer>
FOLLY_SHAREDMUTEX_TLS uint32_t SharedMutexImpl<
ReaderPriority,
Tag_,
Atom,
BlockImmediately,
AnnotateForThreadSanitizer>::tls_lastDeferredReaderSlot = 0;
template <
bool ReaderPriority,
typename Tag_,
template <typename> class Atom,
bool BlockImmediately,
bool AnnotateForThreadSanitizer>
bool SharedMutexImpl<
ReaderPriority,
Tag_,
Atom,
BlockImmediately,
AnnotateForThreadSanitizer>::tryUnlockTokenlessSharedDeferred() {
auto bestSlot =
make_atomic_ref(tls_lastTokenlessSlot).load(std::memory_order_relaxed);
for (uint32_t i = 0; i < kMaxDeferredReaders; ++i) {
auto slotPtr = deferredReader(bestSlot ^ i);
auto slotValue = slotPtr->load(std::memory_order_relaxed);
if (slotValue == tokenlessSlotValue() &&
slotPtr->compare_exchange_strong(slotValue, 0)) {
make_atomic_ref(tls_lastTokenlessSlot)
.store(bestSlot ^ i, std::memory_order_relaxed);
return true;
}
}
return false;
}
template <
bool ReaderPriority,
typename Tag_,
template <typename> class Atom,
bool BlockImmediately,
bool AnnotateForThreadSanitizer>
template <class WaitContext>
bool SharedMutexImpl<
ReaderPriority,
Tag_,
Atom,
BlockImmediately,
AnnotateForThreadSanitizer>::
lockSharedImpl(uint32_t& state, Token* token, WaitContext& ctx) {
while (true) {
if (UNLIKELY((state & kHasE) != 0) &&
!waitForZeroBits(state, kHasE, kWaitingS, ctx) && ctx.canTimeOut()) {
return false;
}
uint32_t slot = make_atomic_ref(tls_lastDeferredReaderSlot)
.load(std::memory_order_relaxed);
uintptr_t slotValue = 1; // any non-zero value will do
bool canAlreadyDefer = (state & kMayDefer) != 0;
bool aboveDeferThreshold =
(state & kHasS) >= (kNumSharedToStartDeferring - 1) * kIncrHasS;
bool drainInProgress = ReaderPriority && (state & kBegunE) != 0;
if (canAlreadyDefer || (aboveDeferThreshold && !drainInProgress)) {
/* Try using the most recent slot first. */
slotValue = deferredReader(slot)->load(std::memory_order_relaxed);
if (slotValue != 0) {
// starting point for our empty-slot search, can change after
// calling waitForZeroBits
uint32_t bestSlot =
(uint32_t)folly::AccessSpreader<Atom>::current(kMaxDeferredReaders);
// deferred readers are already enabled, or it is time to
// enable them if we can find a slot
for (uint32_t i = 0; i < kDeferredSearchDistance; ++i) {
slot = bestSlot ^ i;
assert(slot < kMaxDeferredReaders);
slotValue = deferredReader(slot)->load(std::memory_order_relaxed);
if (slotValue == 0) {
// found empty slot
make_atomic_ref(tls_lastDeferredReaderSlot)
.store(slot, std::memory_order_relaxed);
break;
}
}
}
}
if (slotValue != 0) {
// not yet deferred, or no empty slots
if (state_.compare_exchange_strong(state, state + kIncrHasS)) {
// successfully recorded the read lock inline
if (token != nullptr) {
token->type_ = Token::Type::INLINE_SHARED;
}
return true;
}
// state is updated, try again
continue;
}
// record that deferred readers might be in use if necessary
if ((state & kMayDefer) == 0) {
if (!state_.compare_exchange_strong(state, state | kMayDefer)) {
// keep going if CAS failed because somebody else set the bit
// for us
if ((state & (kHasE | kMayDefer)) != kMayDefer) {
continue;
}
}
// state = state | kMayDefer;
}
// try to use the slot
bool gotSlot = deferredReader(slot)->compare_exchange_strong(
slotValue,
token == nullptr ? tokenlessSlotValue() : tokenfulSlotValue());
// If we got the slot, we need to verify that an exclusive lock
// didn't happen since we last checked. If we didn't get the slot we
// need to recheck state_ anyway to make sure we don't waste too much
// work. It is also possible that since we checked state_ someone
// has acquired and released the write lock, clearing kMayDefer.
// Both cases are covered by looking for the readers-possible bit,
// because it is off when the exclusive lock bit is set.
state = state_.load(std::memory_order_acquire);
if (!gotSlot) {
continue;
}
if (token == nullptr) {
make_atomic_ref(tls_lastTokenlessSlot)
.store(slot, std::memory_order_relaxed);
}
if ((state & kMayDefer) != 0) {
assert((state & kHasE) == 0);
// success
if (token != nullptr) {
token->type_ = Token::Type::DEFERRED_SHARED;
token->slot_ = (uint16_t)slot;
}
return true;
}
// release the slot before retrying
if (token == nullptr) {
// We can't rely on slot. Token-less slot values can be freed by
// any unlock_shared(), so we need to do the full deferredReader
// search during unlock. Unlike unlock_shared(), we can't trust
// kPrevDefer here. This deferred lock isn't visible to lock()
// (that's the whole reason we're undoing it) so there might have
// subsequently been an unlock() and lock() with no intervening
// transition to deferred mode.
if (!tryUnlockTokenlessSharedDeferred()) {
unlockSharedInline();
}
} else {
if (!tryUnlockSharedDeferred(slot)) {
unlockSharedInline();
}
}
// We got here not because the lock was unavailable, but because
// we lost a compare-and-swap. Try-lock is typically allowed to
// have spurious failures, but there is no lock efficiency gain
// from exploiting that freedom here.
}
}
} // namespace folly