View Issue Details

ID: 0017588
Project: MMW 5
Category: Playback
View Status: public
Last Update: 2021-03-01 22:50
Reporter: rusty
Assigned To:
Priority: urgent
Severity: crash
Reproducibility: unable to reproduce
Status: closed
Resolution: fixed
Product Version: 5.0
Target Version: 5.0
Fixed in Version: 5.0
Summary: 0017588: Random crash 82BD0000 (during scanning) - regression 2313
Description: This is a non-reproducible crash that occurred after the following series of steps with build 2314:

0 Run MM5
--> Podcasts update and volume is automatically leveled
1 Selected 5 YT videos and pressed ENTER to initiate playback
2 D&D several other music videos to the NP list
3 D&D several other audio tracks to the NP list
4 Pressed Next once or twice
--> whitescreen + crashlog 82BD0000

Note: videos were playing in the Preview window

I've been unable to replicate this despite numerous attempts, so resolve this if the log doesn't give enough info.
Tags: No tags attached.
Fixed in build: 2315

Activities

rusty

2021-02-21 02:13

administrator   ~0062063

Drakinite also mentioned having experienced this crash during the scanning process.

Ludek

2021-02-21 17:27

developer   ~0062071

Last edited: 2021-02-21 21:18

This kind of crash (82BD0000) is new: it was first observed in 2313 by Drakinite, and in 2314 it came from five different users in a single day!
So it is probably a regression; unfortunately the callstack does not tell us anything :-/

The log shows EExternalException and repeated '0.000s - PID:0 -' lines in the DbgView part; we need to figure out what it is and why this started to happen.
If anyone is able to replicate this, a standard debug log (with DbgView started prior to MM5 start) might also help here -- or a trigger for this issue / a screencast.

Ludek

2021-02-21 17:38

developer   ~0062072

Last edited: 2021-02-21 17:44

As the EExternalException is thrown in WaitOrTimerCallback > line tmr.execute(TTimer(tmr).FReleaseAfterCallback), it could be an exception inside TTimer.execute; Eureka probably just does not show us the full callstack.
I remember that Michal had a similar issue in the past, and enabling MEMCHECKING (FastMM) somehow forced Eureka to give us a better callstack.

So the solution would be to compile a testing build (with MEMCHECKING enabled) for a person who can replicate crash 82BD0000, so that a new crash log can be generated with the special build.

Assigned to Petr to create this special testing build -- just note that this build will be much slower and may exhibit further issues (as Eureka is known to not work correctly with FastMM).
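[Editorial aside, not part of the original report.] MM5 is a Delphi application, but the swallowed-callstack problem described above is language-agnostic: when an exception escapes a timer callback running on a pool/worker thread, the reported stack trace contains only the worker thread's frames (here WaitOrTimerCallback > TTimer.execute), never the code that scheduled the callback. A minimal Python sketch of the same effect, with hypothetical names:

```python
import threading
import traceback

captured = []  # tracebacks reported from worker threads

def hook(args):
    # threading.excepthook (Python 3.8+) receives exceptions that escape
    # a thread's run() method -- much like a global crash handler.
    captured.append("".join(traceback.format_exception(
        args.exc_type, args.exc_value, args.exc_traceback)))

threading.excepthook = hook

def failing_callback():
    raise RuntimeError("boom inside timer callback")

def schedule_failing_timer():
    # This scheduling frame is what a debugger would need to see,
    # but it will be absent from the reported traceback.
    t = threading.Timer(0.01, failing_callback)
    t.start()
    return t

t = schedule_failing_timer()
t.join()

# The captured trace names Timer.run and failing_callback, but not
# schedule_failing_timer -- analogous to Eureka showing only the
# WaitOrTimerCallback / TTimer.execute frames.
print("failing_callback" in captured[0])        # True
print("schedule_failing_timer" in captured[0])  # False
```

This is why the comment above resorts to a MEMCHECKING build: when the original call site is not on the faulting thread's stack, richer memory diagnostics are one of the few ways to recover useful context.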

Ludek

2021-02-21 21:26

developer   ~0062083

ok, assigned to Rusty (or anyone who can replicate) to generate a crash log with the special build from /staff_files/forLudek

Ludek

2021-02-22 09:18

developer   ~0062085

Assigned back to me. After reviewing the code, I most probably see the reason for this crash now.

Ludek

2021-02-22 15:20

developer   ~0062088

Committed a fix based on my hypothesis about why this crash occurs.

Must be confirmed in 2315. Anyhow, if any of you can replicate crash 82BD0000, let me know.

Ludek

2021-02-23 10:47

developer   ~0062107

Last edited: 2021-02-23 19:06

1) While testing the special build with MEMCHECK, I experienced crash A04C8504 in the first-time wizard --> fixed in 2315
2) I also found some issues related to termination of the scanning task --> also fixed in 2315

3) Re-scanning the same (already scanned) location seems slow to me; this was caused by:
a) a performance leak while reading XSPF playlist content => fixed in 2315
b) the '[x] Analyze files for duplicates (takes extra time)' option being enabled

Ludek

2021-02-23 22:54

developer   ~0062123

I have made some changes that could help and added some debug strings.
Resolved for testing in the upcoming build; please generate new logs in case the bug still appears.

peke

2021-02-25 08:42

developer   ~0062132

Tested 2315 and was unable to replicate; I even left 15+ YT videos mixed with audio tracks playing to test whether there is a memory leak that could cause the crash. No issues.

Ludek

2021-03-01 22:50

developer   ~0062187

Crash 82BD0000 is still coming in from 2314 builds, but there isn't a single such crash from 2316 -- so the bug is resolved.