
RLMException - mmap() failed: Cannot allocate memory size: #8523

Closed

kirin233x opened this issue Mar 20, 2024 · 4 comments
Labels
Encryption:On Frequency:Always More-information-needed More information is needed to progress. The issue will close automatically in 2 weeks. O-Community Repro:No SDK-Use:Local T-Bug Waiting-For-Reporter Waiting for more information from the reporter before we can proceed

Comments


kirin233x commented Mar 20, 2024

How frequently does the bug occur?

Always

Description

My app uses several queues: the main thread, a serial work queue, and some child threads. The main thread mainly listens for data changes through Realm's observe API and updates the UI, and the serial work queue performs all data writes. The child threads mostly read Realm data for business logic; that data rarely changes, and when it does change I dispatch the write to the work queue. But we are seeing errors related to `mmap() failed: Cannot allocate memory size:` and `std::bad_alloc`, and I want to know how I can modify my code to reduce the occurrence of these errors.
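For illustration, the pattern looks roughly like this (simplified; `Item`, `Store`, and the queue label are placeholders rather than our real code):

    import Foundation
    import RealmSwift

    final class Item: Object {
        @Persisted var name = ""
    }

    final class Store {
        // Serial queue that owns all writes.
        private let workQueue = DispatchQueue(label: "com.example.realm.work")
        private var token: NotificationToken?

        // Main thread: observe changes and feed the UI.
        func startObserving() {
            guard let realm = try? Realm() else { return }
            token = realm.objects(Item.self).observe { _ in
                // update the UI from the change set
            }
        }

        // All writes funnel through the serial work queue.
        func write(_ block: @escaping (Realm) throws -> Void) {
            workQueue.async {
                autoreleasepool {
                    guard let realm = try? Realm() else { return }
                    try? realm.write { try block(realm) }
                }
            }
        }
    }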

I have a few questions:

  1. When we use Realm, is the space mapped by mmap the size of the Realm file? Does frequent reading change how much address space mmap uses?
  2. I wrapped every Realm instance in an autoreleasepool, but it didn't seem to help:
    private func initRealm(encryptionKey: Data?, onInitException: ((String, String) -> Void)? = nil) -> Realm? {
        autoreleasepool {
            do {
                // Open (and immediately discard) the default Realm; the default
                // configuration used below does not actually require this.
                _ = try? Realm()
                // Start from the default configuration and point it at our own file.
                var config = Realm.Configuration.defaultConfiguration
                config.fileURL!.deleteLastPathComponent()
                config.fileURL!.appendPathComponent(self.fileName)
                config.fileURL!.appendPathExtension("realm")
                config.encryptionKey = encryptionKey
                config.readOnly = readOnly
                config.schemaVersion = UInt64(schemaVersion)
                config.migrationBlock = migration
                // Compact on launch once the file exceeds 100 MB and is less than half used.
                config.shouldCompactOnLaunch = { totalBytes, usedBytes in
                    let oneHundredMB = 100 * 1024 * 1024
                    return (totalBytes > oneHundredMB) && (Double(usedBytes) / Double(totalBytes)) < 0.5
                }
                return try Realm(configuration: config, queue: queue)
            } catch {
                NSLog("An error occurred. Database initialization failed. realm: \(self.fileName), reason: \(error)")
                onInitException?(self.fileName, error.localizedDescription)
                return nil
            }
        }
    }

But some databases are read frequently, and for those I hold the Realm handles on different threads. We did the same thing on version 10.13, and this error did not occur there.

  3. Is there any way to monitor how much mmap space Realm is using? Then I could try some analysis from the memory side.
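
One thing I can try in the meantime is sampling the process's virtual-memory statistics with the Mach task_info API. This is process-wide rather than Realm-specific, but a growing `virtual_size` alongside the mmap failures would point at address-space exhaustion; a minimal sketch:

    import Darwin

    // Samples the process's virtual-memory statistics via TASK_VM_INFO.
    func sampleVMInfo() -> task_vm_info_data_t? {
        var info = task_vm_info_data_t()
        var count = mach_msg_type_number_t(
            MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
        let kr = withUnsafeMutablePointer(to: &info) {
            $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
                task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), $0, &count)
            }
        }
        return kr == KERN_SUCCESS ? info : nil
    }

    if let vm = sampleVMInfo() {
        print("virtual size: \(vm.virtual_size / (1024 * 1024)) MB")
        print("physical footprint: \(vm.phys_footprint / (1024 * 1024)) MB")
    }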

From what we can see, the issue appeared after we upgraded Realm from 10.13 to 10.43, but we don't yet know what could be causing it. Any ideas on how to analyze this would be appreciated, thank you!

Stacktrace & log output

I can't reproduce it; it only occurs in our production builds.

Can you reproduce the bug?

No

Reproduction Steps

No response

Version

10.43

What Atlas Services are you using?

Local Database only

Are you using encryption?

Yes

Platform OS and version(s)

iOS 17.1.2

Build environment

Xcode version: 15.2


sync-by-unito bot commented Mar 20, 2024

➤ PM Bot commented:

Jira ticket: RCOCOA-2315

@andreasley

I've run into similar issues with a large Realm database (> 5 GB). Despite lots of free memory, allocation failed in various ways.

The solution was to add the Extended Virtual Addressing entitlement to the app.
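
For anyone else hitting this: that's the `com.apple.developer.kernel.extended-virtual-addressing` key. A minimal sketch of the app's .entitlements file (the capability also has to be enabled for the App ID):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>com.apple.developer.kernel.extended-virtual-addressing</key>
        <true/>
    </dict>
    </plist>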

@nirinchev
Member

Adding the entitlement is one way to mitigate the issue. Another is to understand what's causing Realm files to be open for extended periods on background threads. You can have a Realm instance open on the main thread for the lifetime of your app, but Realms open on background threads should be closed as soon as possible or Realm.refresh() should be called at opportune times. Without a simple repro case, it'd be difficult to say what the source of the problem is.
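
For illustration, a minimal sketch of that pattern (`Item` and the function names are placeholders, assuming RealmSwift):

    import Foundation
    import RealmSwift

    final class Item: Object {
        @Persisted var name = ""
    }

    // Background work: open, use, and release the Realm inside one
    // autoreleasepool so the instance (and its mmap'd view of the file)
    // is closed as soon as the block ends.
    func fetchNamesInBackground(completion: @escaping ([String]) -> Void) {
        DispatchQueue.global(qos: .utility).async {
            autoreleasepool {
                guard let realm = try? Realm() else { return }
                // Copy plain values out before the Realm closes.
                let names = realm.objects(Item.self).map(\.name)
                completion(Array(names))
            }
        }
    }

    // Long-lived instance (e.g. the one held on the main thread for the
    // app's lifetime): advance it to the latest version at opportune
    // times so old file versions can be reclaimed.
    func refreshLongLivedRealm(_ realm: Realm) {
        _ = realm.refresh() // returns true if there was anything to refresh
    }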

@nirinchev nirinchev added the More-information-needed More information is needed to progress. The issue will close automatically in 2 weeks. label Apr 22, 2024
@sync-by-unito sync-by-unito bot added the Waiting-For-Reporter Waiting for more information from the reporter before we can proceed label Apr 22, 2024
Copy link

github-actions bot commented May 8, 2024

This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further.

@github-actions github-actions bot closed this as completed May 8, 2024