Channel: Symantec Connect - Backup and Recovery - Discussions

CentOS as NetBackup Master Server


Hi All,

One of our customers wants to install another NetBackup instance in their environment to back up additional servers. They already have an existing NetBackup v7.6, but it is located in another server farm. The licensing is capacity-based, and as far as I know they still haven't reached their capacity.

They want to install an NBU master server on CentOS 6. I'm planning to install NBU 7.1 on the CentOS 6 machine, since according to the NBU Software Compatibility List (SCL), only NBU 7.1 is supported as a master server on CentOS. Can I use their NBU v7.6 license on NBU v7.1? Are there any conflicts?

Please provide a direct link for downloading NBU 7.1.

Thanks


DLO 8.0 beta version install


Hello everybody,

Here is my install environment (DNS is OK):

1. Windows 2008 server, domain controller

IP:10.10.1.137

DOMAIN: narmada.com

Logged in as narmada\administrator.

2. Windows 2008 server: will install the DLO server; SQL Server 2008 64-bit is installed, with instance name "test".

QQ截图20150424111001.png

IP:10.10.1.123

Logged in as narmada\administrator.

The SQL services also run under narmada\administrator, and the Named Pipes and TCP/IP protocols are enabled.

When I run the DLO setup.exe, this message appears:

QQ截图20150424111052.png

I have already verified that the password is correct and the domain user is narmada\administrator, but the error still occurs.

How long until BE marks a job as missed if it did not start according to schedule in BE2010?


How long until BE marks a job as missed if it did not start according to its schedule in BE2010?

How long will BE2010 retry the missed job? Can the time be customized in BE2010?

AIX Netbackup CatalogBackup.lck with world writable


Hi,

We are using an AIX 7.1 server with NetBackup 7.5. Only the root user has access to NetBackup for backup and recovery. The root umask is set to 027, which by rights should prevent any NetBackup-generated files from being world-writable. However, we found a few CatalogBackup.lck files that are world-writable in the /usr/openv/netbackup/db/images/... directory. Could anyone suggest a permanent solution, since our audit requirements do not allow any world-writable files?
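As an interim workaround (a cleanup, not a root-cause fix), a small sweep of the images directory can strip the world-write bit from any stray CatalogBackup.lck files. The function name below is hypothetical, and the default path is the directory mentioned above; it assumes a POSIX find/chmod as found on AIX.

```shell
# A minimal sketch, not a permanent fix: locate world-writable
# CatalogBackup.lck files and remove only the world-write bit.
fix_world_writable_lck() {
    dir=${1:-/usr/openv/netbackup/db/images}
    # -perm -0002 matches files whose world-write bit is set;
    # chmod o-w clears that bit while leaving owner/group bits alone.
    find "$dir" -type f -name 'CatalogBackup.lck' -perm -0002 -print \
        -exec chmod o-w {} \;
}
```

Running this from cron would keep the audit scan clean while the root cause is investigated; a likely culprit is the umask of whichever daemon creates the lock files, which may not inherit root's interactive umask of 027.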

Thank You.

The date of end of life


I was told that an issue in BE2014 was fixed in BE2015. May I know the end-of-life date for BE2014? Will Symantec continue to provide patches for BE2014 SP2? Thanks


Invalid Physical Volume Library Drive Identifier.


Hi, good morning,

I need to restore some *.jpeg files (please refer to attachment 69608.jpg).

When I click Finish to start the restore job, I receive this message (please refer to symatec_restore_error.jpg).

Kindly advise why this error occurs and what the solution is.

Thank you.

upgrade from 7.5.0.7 to 7.6


Hi,

Kindly guide me through the detailed steps for upgrading from 7.5.0.7 to 7.6. When I check my FileConnect account, I can see 7.6.1.

So can I upgrade directly from 7.5.0.7 to 7.6.1, or do I need to go to 7.6 first and then 7.6.0.1?

Please advise. If possible, share a link to download the 7.6 installer.

My computer caught fire…


I was advised by Symantec to repost this topic here...

All I have got left is the external drive (USB, 1000 GB) containing the backups.

I wasn't unduly worried – after all, this is why you have backups, isn't it?

The computer was fairly old and had two IDE drives with Windows 7 and a matching motherboard with an older type of RAM. The backup was made with Ghost 12 – I had no idea that it had become obsolete.

I bought a new computer with two SATA drives. Apparently IDE drives and matching motherboards are no longer made. I booted from the SRD, but the machine couldn't see the USB drive. I installed Windows 7, and after that it could see the USB drive when I booted from the SRD. I told it to restore the C drive, after which the machine wouldn't boot from that drive.

I subsequently chatted with Symantec support who told me that if I installed the appropriate Ghost application on the new machine it would be possible to restore from my USB drive. The agent insisted that in the whole of Symantec there was not a single remaining copy of any version of ghost…

I managed to find a "Try and buy" copy of Ghost 15. Will this work? If not, what can I do?

The chatline agent did not know the answer to these questions and referred me to this forum.

David


Moving AdvancedDisk (Backup Destination) to Deduplication Disk (5230 Appliance)


Dear All,

Our current environment is as below:

1. NetBackup Master Server

- Windows 2008 Std R2

- NBU version 7.6.1.1

- Local disk, configured as an AdvancedDisk backup destination

2. NetBackup 5230 Appliance

- acts as a media server for the above master server

- firmware 2.6.1.1

- deduplication pool/stu

We plan to migrate the AdvancedDisk data to the deduplication pool. Are there any tools, suggestions, or methods you can share?

Or perhaps some of you already have experience with this activity?
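One approach worth investigating (a sketch under assumptions, not a confirmed procedure for this environment) is to duplicate the existing images from the AdvancedDisk storage unit to the appliance's deduplication storage unit with bpduplicate, then let the AdvancedDisk copies expire. The STU name and time window below are placeholders, and on a Windows master the admincmd path differs:

```shell
# Illustrative only: DST_STU and the 720-hour (30-day) window are assumptions.
ADMINCMD=/usr/openv/netbackup/bin/admincmd   # Unix-style path; differs on Windows
DST_STU=appliance_dedupe_stu                 # hypothetical appliance dedupe STU

# Preview which images fall inside the window:
#   $ADMINCMD/bpimagelist -hoursago 720
# Duplicate them to the deduplication storage unit:
#   $ADMINCMD/bpduplicate -dstunit "$DST_STU" -hoursago 720
```

Duplication creates a second, catalog-tracked copy; the new copy may need to be promoted to primary (bpchangeprimary) before the AdvancedDisk copy expires.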

Thanks! :)

Upgrade BE 2010 to BE 15. Any way to keep the history (logs)?


I am thinking of upgrading BE 2010 to the latest version available.

Is it possible to keep not only the catalogs, DB and jobs but also the job history?

Thank you

Scheduling Issues


Hi All,

I need your input on an issue I am currently facing related to scheduling.

Setup:

Master server [Linux 2.6]

Media server [Linux 2.6]

NBU version [7.6.0.4] on both master and media servers

Admin console installed on a Windows 2008 R2 Enterprise machine

Issue: I need to understand which time zone the backup schedules follow. I configure backup policies through an admin console installed on a Windows machine located in the EST time zone, while the actual master and media servers are located in the CST time zone. When I configure a backup policy through this console, which time zone will my policies follow?

Options:

1) The master server's time zone, which is CST

2) The admin console server's time zone, which is EST

I am a bit confused, as I see schedules running or triggering at seemingly random times. Please let me know your suggestions.

DLOCommandu -EmergencyRestore Error


Hi all,

I am having some issues restoring data using DLOCommandu -EmergencyRestore.

We have previously changed network locations for our NUDF and also upgraded to DLO 7.6.

When running:

dlocommandu -emergencyrestore "\\DLOServer\DLO NUDF\COMPANYDOMAIN-user5\.dlo" -W mypassword -AP \\DLOServer\temp -i

from the original location, or any other location, we receive the following error:

User share path not found. The user share path format could be different from the path configured in Symantec DLO.

Does anyone know of a solution to this issue?

Many thanks.

Doc for Symantec System Recovery 2013 Management Solution


Hello,

Where can I find documentation for the SSR Management Solution aimed at customers, e.g. how do I create a backup job on the SSR MS web page?

Not a doc about installing the SSR MS.

Kind regards
Blacksun


Doing incremental backups on a NAS device.


I have created a backup job for a Synology NAS device. The full backup runs with no issues, but when it is time for an incremental backup it just performs a full backup again.

I am currently using Backup Exec 2014. Just curious as to what I may have missed to get this working correctly.

Dan

BE2014 Optimized Duplication - 'Loading Media - Duplicate' for hours


Hi There,

I have BE2014 on the latest version, and my architecture is as follows:

- Production environment, with CASO, the Deduplication add-on, and a local dedupe store on this Windows 2012 R2 server (no extra devices connected).

- DR environment, with a BE managed server out there and the Deduplication add-on (no extra devices involved there either). I have a dedupe store shared out on this managed server (shared with the CASO server in production).

-At present, there is a 1GB LAN connection between the two servers/sites.

I have my backup jobs defined on the CASO server in production, and the data is being saved to the dedupe store in Production just fine.

I have a stage added to duplicate the data over to the shared store over in DR, but the jobs are taking forever (too long for the backup window at present).

Every time I check in on it, it's almost constantly stuck on 'Loading Media - Duplicate'.

Anyone got any ideas?

Thanks in advance


Replicate Adv Disk backup between remote masters


I have a situation where I need to back up 3TB of compressed data on a regular basis. I have two 5220 appliances, both acting as master/media servers. I really don't want the data sitting in my dedupe pool, so I'm trying to take advantage of the 3.5TB of AdvancedDisk storage that I have. What I would like to do is have one appliance back up the data and keep it for a week, then have it copied over to the remote master and kept for another week, for about two weeks' total retention. Any thoughts on how to do this? I thought I could back it up to the dedupe pool on the local master, duplicate it to AdvancedDisk, and replicate to the remote master, but that doesn't seem to be working. I know I will have to use the dedupe pool for AIR, but I want that backup copy to expire immediately if possible.

Event 1023 Windows cannot load the extensible counter DLL Backup Exec.


I am getting the following events from our newly built Server 2012 R2 server running BE 15. I am not sure what is causing this issue, but it appears to be related to BE and performance counters.

Log Name:      Application
Source:        Microsoft-Windows-Perflib
Date:          4/24/2015 11:02:17 AM
Event ID:      1008
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      XXXXXXX
Description:
The Open Procedure for service "BITS" in DLL "C:\Windows\System32\bitsperf.dll" failed. Performance data for this service will not be available. The first four bytes (DWORD) of the Data section contains the error code.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Perflib" Guid="{13B197BD-7CEE-4B4E-8DD0-59314CE374CE}" EventSourceName="Perflib" />
    <EventID Qualifiers="49152">1008</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-04-24T18:02:17.000000000Z" />
    <EventRecordID>3405</EventRecordID>
    <Correlation />
    <Execution ProcessID="0" ThreadID="0" />
    <Channel>Application</Channel>
    <Computer>XXXXXXX</Computer>
    <Security />
  </System>
  <UserData>
    <EventXML xmlns="Perflib">
      <param1>BITS</param1>
      <param2>C:\Windows\System32\bitsperf.dll</param2>
      <binaryDataSize>4</binaryDataSize>
      <binaryData>02000000</binaryData>
    </EventXML>
  </UserData>
</Event>

Log Name:      Application
Source:        Microsoft-Windows-Perflib
Date:          4/24/2015 10:53:43 AM
Event ID:      1023
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      XXXXX
Description:
Windows cannot load the extensible counter DLL Backup Exec. The first four bytes (DWORD) of the Data section contains the Windows error code.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Perflib" Guid="{13B197BD-7CEE-4B4E-8DD0-59314CE374CE}" EventSourceName="Perflib" />
    <EventID Qualifiers="49152">1023</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-04-24T17:53:43.000000000Z" />
    <EventRecordID>3403</EventRecordID>
    <Correlation />
    <Execution ProcessID="0" ThreadID="0" />
    <Channel>Application</Channel>
    <Computer>XXXXXX</Computer>
    <Security />
  </System>
  <UserData>
    <EventXML xmlns="Perflib">
      <param1>Backup Exec</param1>
      <binaryDataSize>4</binaryDataSize>
      <binaryData>7E000000</binaryData>
    </EventXML>
  </UserData>
</Event>
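A commonly suggested first step for Perflib 1008/1023 errors in general (a generic Windows counter repair, not a Backup Exec-specific fix, so treat it as an assumption to verify) is rebuilding the performance counter registry from its backup store. The commands are Windows-only and are therefore shown commented out:

```shell
# Run from an elevated command prompt on the BE server (Windows-only):
#   lodctr /R             -- rebuild the counter registry from the backup store
#   winmgmt /resyncperf   -- resynchronise WMI performance counters
REBUILD_CMD='lodctr /R'
RESYNC_CMD='winmgmt /resyncperf'
```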

Deduplication & Optimized duplication - multiple jobs for one server?


Hi There,

I have a number of servers that I want to back up with BE2014 using the deduplication add-on. I will also be adding a stage to duplicate certain backups to an offsite, shared dedupe store.

I have a query about using multiple jobs for the same backed-up server.

Say I create job A (consisting of fulls and incrementals), which backs up one server to the primary dedupe store, and this is duplicated to another dedupe store offsite.

Then I create job B with different retention settings (also backing up the same server to the same dedupe store), and this is also duplicated to the offsite dedupe store.

Will job B create a completely new batch of files (e.g. full backups) on the primary dedupe store, or will everything be fully deduplicated with no duplicate files created, because it can 'see' all of the full backups previously created via job A?

If new backups are created, will the same thing happen in the offsite dedupe store, with the duplicate jobs also creating new backups there?

Basically, I will be creating one job for daily/weekly backups, with the weekly duplicated offsite. I will create a separate monthly job with suitable retention, which will have a duplicate stage to send it offsite. I will also create separate quarterly and annual jobs with suitable retention.

Is this a good approach, and will it all work together and deduplicate well?

Thanks in advance for your advice

Why do we set the robot path to one of the drives?


Why do we set the robot path to one of the drives in a tape library, and how can we check which drive that path is set on from the master server command line on AIX? What happens if that drive becomes faulty, and what do we need to do? Do we need to move the path to another working drive while the faulty one is replaced, and do we need to configure the drive again afterwards, or is there something else we can do?
The robot is under master server control on AIX.

I am looking for command-line steps for the AIX platform rather than the GUI.
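For reference, the usual command-line checks run from the master server shell (assuming a default install under /usr/openv; exact flags can vary by NBU version, so verify against your release's commands reference) are sketched below, commented out since they only run on a configured NetBackup host:

```shell
VOLMGR=/usr/openv/volmgr/bin   # default NetBackup volume manager path on Unix

# Show configured robots/drives, including the robotic control path:
#   $VOLMGR/tpconfig -d
# Probe the robotic control device and drives directly:
#   $VOLMGR/scan -changer
# Show current drive status (and down a faulty drive if needed):
#   $VOLMGR/vmoprcmd -d
```

Note that on AIX the robotic control path is often a separate changer device such as /dev/smcN rather than a tape drive itself; it appears tied to a drive only in libraries that present robotic control through a drive's SCSI path.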


BE2014 Deduplication - how many concurrent jobs do you run, what spec is your backup server?


Hi

I have a couple of quick questions about the dedupe store's concurrent jobs setting (for a standard dedupe store running on the media server, with no external devices and no OST appliances).

What do people normally leave the setting at? It looks like I'm going to be running quite a few jobs at once, but I'm hesitant to run more than three jobs concurrently, as it might slow all the jobs down too much.

I have BE running as a VM, with 2 x dual-core processors and 32GB of RAM. If I double this to 8 cores in total, will it help me run more concurrent jobs efficiently?

Thanks in advance for your opinions
