Microsoft Exchange Server 2013 DAG: Failed to notify source server about the local truncation point

I was recently tasked with a project that included migrating an Exchange Server 2010 environment to Exchange Server 2013. Fast forward: everything was migrated to a single server, the second server was set up and configured properly as well, and it was time to add the copies of the mailbox databases.

The customer required circular logging to be enabled, but before adding a mailbox database copy, circular logging must be disabled.
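
For reference, this is roughly what that looks like in the Exchange Management Shell; the database and server names (DB01, EX02) below are placeholders, not the customer's actual names:

    # Circular logging must be off before a copy can be added
    Set-MailboxDatabase -Identity "DB01" -CircularLoggingEnabled $false

    # Add the copy of the database on the second server
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "EX02"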

While attempting to add a copy of one of the databases, I got the following error:

The seeding operation failed. Error: An error occurred while performing the seed operation. Error: Failed to notify source server ‘ExchangeServer FQDN’ about the local truncation point. Hresult: 0xc8000713. Error: Unable to find the file. [Database: DATABASE, Server: ExchangeServer FQDN]

The status of the database copy was Failed and Suspended.
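
If you prefer the shell over the EAC, the copy status can be checked with something along these lines (again, the database name is a placeholder):

    Get-MailboxDatabaseCopyStatus -Identity "DB01" | Format-Table Name,Status,CopyQueueLength,ContentIndexState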

I tried adding another database copy, thinking the issue might be specific to that single database, but the results were the same. I did some quick research on the subject, and most of the responses said you should (a rough command sketch follows the list):

  1. Dismount the database.
  2. Run eseutil with the /mh switch (more info here).
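
Roughly, that suggestion translates to the following; the database name and file path below are assumptions, so adjust them to your environment:

    # Dismount the affected database first
    Dismount-Database -Identity "DB01" -Confirm:$false

    # Check the database header state with eseutil
    eseutil /mh "D:\Databases\DB01\DB01.edb"

    # Mount it again when done
    Mount-Database -Identity "DB01"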

I had a lot of databases, and dismounting them to run the utility was nearly impossible, even on a weekend. On the other hand, as I continued my research, I found a thread on TechNet that mentioned, alongside running eseutil, “moving the logs” to another directory so that they would get recreated, and that's when it hit me!

What if I re-enabled circular logging and then tried to update the database copy? Since circular logging had been enabled all along, I was already missing a lot of the transaction logs that were originally written for the database anyway!

  • I first tried hitting the Update option, but sadly I got the same error.
  • I then tried hitting the Resume option, which again resulted in the Failed and Suspended status, and then hit the Update option again (the shell equivalent is sketched after this list).
  • AND IT WORKED :-D
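
For completeness, here is a sketch of the same sequence in the Exchange Management Shell; I did this through the GUI myself, and DB01/EX02 are placeholder names:

    # 1. Re-enable circular logging on the source database (the key step)
    Set-MailboxDatabase -Identity "DB01" -CircularLoggingEnabled $true

    # 2. Resume the failed and suspended copy
    Resume-MailboxDatabaseCopy -Identity "DB01\EX02"

    # 3. Update (reseed) the copy again
    Update-MailboxDatabaseCopy -Identity "DB01\EX02"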

I was glad that I did not have to go through the eseutil havoc, as it would have pushed the project back by at least a week (going through all the databases, that is). I hope this helps!

(Abdullah)^2



20 Responses

  1. Edgar says:

    Great, I had this same problem, did the same thing, and it's working.
    Thanks

  2. Eliran says:

    Thank you !

  3. George Tzikas says:

    How long before you tried to reseed after turning on circular logging? Was it straight away or the next day?

  4. Akwasi Agyarko says:

    Hello, you are a life saver… I have been sitting here pulling my ears till I stumbled on your post.

    It worked for me as well.

  5. fernando says:

    Sweet!! I’m fairly new to Exchange back-end stuff and ran into this about an hour ago. I feared I would be spending hours and hours troubleshooting, but your page came up 3rd in my search and took me right to a solution. Thanks for sharing, you saved me a ton of time!!

  6. iT Works says:

    Thanks, it works:
    When the second copy was in a failed state, enable circular logging – no dismount needed, because Exchange thinks that there is a copy already.
    Then Resume.
    Then Update.

  7. Courtney Piratzky says:

    I can’t believe this worked. Thanks so much for posting!!

  8. Jabulani says:

    Worked like a charm, but I had to disable the DAG network adapters first.

  9. Boris B. says:

    Thank you, worked for me.

  10. kashef says:

    Thank You Very Much.

    You are a life saver hero.

  11. Tresor Nshuti says:

    Thank you very much @Abdullah,
    I also followed the steps you shared and it worked perfectly well on Exchange 2016.

  12. Mariana says:

    I had the same problem on Exchange 2019. I have read many posts with very complex scenarios. The following steps work perfectly, I can't believe it: UPDATE, RESUME, UPDATE, and it starts copying. Thank you very much.
