How do I reset a lost administrative password?

By default the first user’s account is an administrative account, so if the UI is prompting you for a password it’s probably that person’s user password. If the user doesn’t remember their password, you need to reset it; to do this, boot into recovery mode.
Boot up the machine and, after the BIOS screen, hold down the left Shift key. You will then be presented with the GRUB menu.
I’ve noticed on some systems that timing when to hit the left Shift key can be tricky; sometimes I miss it and need to try again.
Hit the down arrow until you select the second entry from the top (the one with “recovery mode” in the description), then hit Enter.
Now you should see the recovery menu.
Using the arrow keys, scroll down to either root or netroot (it doesn’t matter in this case), then hit Enter.
You should now see a root prompt. At this stage the filesystem is mounted read-only, so you have to remount it with write permissions:
mount -o remount,rw /
Now we can set the user’s password with the passwd command. (In this example I will use jorge as the username; substitute the actual user’s username):
root@ubuntu:~# passwd jorge
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Type in what you want the new password to be at the prompt. After it succeeds, reboot the machine and the user will be able to log in with the new password.

PostgreSQL 8.4 Point-in-time Recovery (using WAL)

PostgreSQL is an open-source, free-to-use Object-Relational Database Management System (ORDBMS) maintained by a group of developers and development companies. PostgreSQL has three modes of backup: SQL dump (pg_dump/pg_dumpall), file-system-level backup, and continuous archiving using the Write Ahead Log (WAL). I will not focus here on the differences between the three, but will introduce PostgreSQL’s WAL. The Write Ahead Log is the part of PostgreSQL where every change made to the database is recorded before it is applied. Keeping a copy of these logs and replaying them will restore a crashed database. This method is called Point-in-Time Recovery: WAL segments up to a certain point in time are replayed to restore crashed or lost databases. This tutorial will walk you through the important steps for both backup and recovery of a PostgreSQL database using this method.
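The replay idea behind WAL-based recovery can be sketched with a toy example (plain Python, not PostgreSQL internals; the log here records simple key/value operations rather than real WAL records):

```python
# Toy write-ahead log: every operation is appended to the log *before*
# it is applied, so the log alone can rebuild the state after a crash.
log = []

def execute(db, op, key, value=None):
    log.append((op, key, value))   # write ahead: log first, then apply
    if op == 'set':
        db[key] = value
    elif op == 'delete':
        db.pop(key, None)

def replay(wal):
    """Rebuild a database from scratch by re-running the logged operations."""
    db = {}
    for op, key, value in wal:
        if op == 'set':
            db[key] = value
        elif op == 'delete':
            db.pop(key, None)
    return db

db = {}
execute(db, 'set', 'a', 1)
execute(db, 'set', 'b', 2)
execute(db, 'delete', 'a')
# Simulate a crash: db is lost, but the log survives and restores the state.
print(replay(log))   # {'b': 2}
```

Point-in-Time Recovery works the same way in spirit: a base copy of the cluster plus a replay of the archived WAL segments up to the desired moment.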
In Point-in-Time Recovery we first take a backup of the WAL segment files and later run the server in recovery mode. For the backup we will use a simple Linux copy command to copy the files to a folder outside PostgreSQL’s data directory.
To enable WAL archiving, we have to alter PostgreSQL’s configuration file, postgresql.conf.
I have worked with different versions of Postgres, sometimes with the same version installed with different options, and the location of the various directories depends heavily on the individual setup. I found mine at /etc/postgresql/8.4/main, and on another PostgreSQL installation it was located at /opt/Postgres/8.4/data. In general, you can use the locate or find commands to find the configuration file and then follow the rest of the tutorial.
Now we have to edit postgresql.conf (through sudo, if needed). Go to the archiving portion of it, uncomment a few lines, and make some changes. In the end, it should look something like this:
# - Archiving -
archive_mode = on                         # allows archiving to be done
                                          # (change requires restart)
archive_command = 'cp %p /mnt/backup/%f'  # command to use to archive a logfile segment
archive_timeout = 30                      # force a logfile segment switch after this
                                          # number of seconds; 0 disables
Note: /mnt/backup is the directory where I want the backups stored. The two variables %p and %f are replaced automatically by the server when a segment is archived: %p is the path of the file to archive (relative to the data directory) and %f is the file name only.
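To make the %p/%f distinction concrete, here is a small simulation of the substitution the server performs (the segment name below is a made-up example):

```python
# Simulate how the server expands archive_command for one WAL segment.
archive_command = 'cp %p /mnt/backup/%f'
p = 'pg_xlog/000000010000000000000001'   # %p: path of the segment, relative to the data directory
f = '000000010000000000000001'           # %f: the file name only
expanded = archive_command.replace('%p', p).replace('%f', f)
print(expanded)
# cp pg_xlog/000000010000000000000001 /mnt/backup/000000010000000000000001
```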
We are done with the prerequisites. To start the backup, switch to the psql console and run the following commands.
# SELECT pg_start_backup('MY BACKUP_1');
It will show something like this:
(1 row)
The backup of the WAL segment files has started, and the files from pg_xlog are being copied to /mnt/backup/. While the backup is “open”, also take a file-system copy of the data directory itself (the base backup); recovery replays the archived WAL on top of that copy.
In order to stop the backup:
# SELECT pg_stop_backup();
(1 row)
This shows the backup process completed successfully.
Important Note: the directory /mnt/backup must be readable and writable by the postgres user (make it the owner). Otherwise, you get an error like this at runtime (as seen in the log file):
2010-08-11 21:54:46 PKST LOG:  archive command failed with exit code 1
2010-08-11 21:54:46 PKST DETAIL: 
The failed archive command was: cp pg_xlog/000000010000000000000000 /mnt/backup/000000010000000000000000
cp: cannot create regular file `/mnt/backup/000000010000000000000000': Permission denied
Now comes the recovery part: much easier than I thought, but a little bit tricky. To continue with Point-in-Time Recovery we first create a file named recovery.conf and place it in the data cluster, which in my case was /var/lib/postgresql/8.4/main.
So what does the recovery file contain? Just another copy command, for copying files back from the backup directory to the data cluster (the opposite of the previous backup command):
restore_command = 'cp /mnt/backup/%f "%p"'
When the recovery gets completed, postgresql will rename ‘recovery.conf’ to ‘recovery.done’.
Now stop the server with /etc/init.d/postgresql-8.4 stop. We presume that there was a database crash and all the information in the data directory was lost; we will let PostgreSQL recover the files from the backup directory.
The next step is to delete files from the pg_xlog directory.
Now start the server by typing /etc/init.d/postgresql-8.4 start in the terminal, which will automatically trigger a recovery: the PostgreSQL server finds recovery.conf and shifts into its recovery mode.
You can confirm a proper recovery by checking that recovery.conf was renamed and by looking at the PostgreSQL server log. The log will look like this:
2010-08-12 12:29:06 PKST LOG:  database system is shut down
2010-08-12 12:29:48 PKST LOG:  database system was shut down at 2010-08-12 12:29:06 PKST
2010-08-12 12:29:48 PKST LOG:  creating missing WAL directory "pg_xlog/archive_status"
2010-08-12 12:29:48 PKST LOG:  starting archive recovery
2010-08-12 12:29:48 PKST LOG:  restore_command = 'cp /mnt/backup/%f %p'
2010-08-12 12:29:48 PKST LOG:  automatic recovery in progress
2010-08-12 12:29:48 PKST LOG:  record with zero length at 0/A000064
2010-08-12 12:29:48 PKST LOG:  redo is not required
2010-08-12 12:29:48 PKST LOG:  selected new timeline ID: 2
2010-08-12 12:29:49 PKST LOG:  archive recovery complete
2010-08-12 12:29:49 PKST LOG:  autovacuum launcher started
2010-08-12 12:29:49 PKST LOG:  database system is ready to accept connections

How to Install MySQL 5.6 on Ubuntu 13.10 x64 / Debian Linux

This post should help you understand how to install MySQL 5.6 on Ubuntu 13.10 x64 / Debian Linux. As I always say, it’s a very simple process. First head over to the MySQL Downloads page and get the 64-bit Debian package for MySQL. The current generally available release is mysql-5.6.15-debian6.0-x86_64.deb [note this may change as minor versions are released, so pick whatever is latest on the site at the time of your installation and adjust the commands to your filename]. Make sure you select Debian Linux from the platform drop-down. Make a note of the md5sum, shown right below the download button, so we can verify the file after we download it; in our case it is 409a79231afb46473f8280a108c9dfdd.

MySQL Downloads Page
Once you click download you will be taken to another page; copy the link behind “No thanks, just start my download”:
Now you have the download link in your clipboard, so let’s go to the terminal of the machine where we will be installing MySQL. Once you are in the terminal window, download the package with the wget command as shown below:
sudo wget <paste-the-copied-download-link>
You should now have the Debian package downloaded from the MySQL website. Calculate the checksum of the downloaded file using the md5sum command as shown below:
md5sum mysql-5.6.15-debian6.0-x86_64.deb
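If you prefer to script the verification, the same check can be done with Python’s hashlib (a sketch; the filename and expected hash are the ones used in this walkthrough, so adjust them to your download):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

expected = '409a79231afb46473f8280a108c9dfdd'
# Uncomment once the package is downloaded:
# print(md5_of('mysql-5.6.15-debian6.0-x86_64.deb') == expected)
```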
The checksum on the website should match the checksum we calculated for the file. This check is optional, but I recommend it. So far we have downloaded the package and verified it is the right file. Before we install MySQL we need to install the libaio1 package for MySQL to install properly; use the following command to get that package installed:
sudo apt-get install libaio1
Now add the mysql user, as the MySQL database engine runs as the mysql user. Use the following command to add the user:
sudo useradd mysql
Now that we have resolved all the dependencies, we can install the MySQL package. To install the MySQL 5.6 package, just run dpkg -i as shown below:
sudo dpkg -i mysql-5.6.15-debian6.0-x86_64.deb
We have now successfully installed the MySQL server and client utilities on this server. We need to do a few more steps before we can start using the database engine. First we need to populate the data directory with the default databases and files. To do that, simply run the following command as the mysql user:
sudo /opt/mysql/server-5.6/scripts/mysql_install_db --user=mysql
We need to change the ownership of this installation to mysql. The default MySQL installation path for this version on Ubuntu is under /opt. The following command will update the ownership of the MySQL installation directory:
sudo chown -R mysql.mysql /opt/mysql/
MySQL directory ownership change
Before we start our database server, let’s add the new installation’s bin path to the system path. This will allow you to run mysql commands from any directory on this server. Edit the /etc/environment file and add the new path there:
sudo vi /etc/environment
and append the new path (/opt/mysql/server-5.6/bin) to the PATH variable in this file.
Now that you have added the path to the environment, source the file for it to take effect:
source /etc/environment
To make the MySQL database server start on system boot or reboot, we need to add it to the system run levels. Copy the server startup script from the support folder at /opt/mysql/server-5.6/support-files/mysql.server to /etc/init.d/, register it with update-rc.d, and remove the stale configuration directory using the following commands:
sudo cp /opt/mysql/server-5.6/support-files/mysql.server /etc/init.d/
sudo update-rc.d mysql.server defaults
sudo rm -R /etc/mysql

We are on the home stretch now; we can start our MySQL database server using the following command:
sudo /etc/init.d/mysql.server start
We have successfully started our database server. From here we can either set the root password and be done configuring the database instance, or run the secure installation script to finish our configuration. For production systems I recommend the secure installation. To do that, simply run:
perl /opt/mysql/server-5.6/bin/mysql_secure_installation
For production systems, create a host-specific user with the required privileges for your application database. If both the web server and the database server are on the same machine, I would say don’t allow any remote connections unless you are using an external client such as Navicat, SQLyog or MySQL Workbench from your home network.
Let’s test whether we can connect to the database server now that we have successfully installed and configured it:
mysql -uroot -p
Use the password that you set up when you ran mysql_secure_installation above.
To test remote connections to this database server, I am going to create a super user that can connect from anywhere. To do that, simply run the following commands inside the database engine:
CREATE USER 'superadmin'@'%' IDENTIFIED BY 'opensourcedbmsadmin';
GRANT ALL PRIVILEGES ON *.* TO 'superadmin'@'%' WITH GRANT OPTION;
Now you can connect to your server with this user from anywhere in the world, as long as the IP/hostname is publicly accessible.

Automatically Starting Tomcat on Ubuntu

Tomcat requires the JAVA_HOME variable. The best way to set it is in your .bashrc file; you will have to log out of the shell for the change to take effect. Edit the file:
vi ~/.bashrc
Add the following line:
export JAVA_HOME=/usr/lib/jvm/java-6-sun
At this point you can start Tomcat by executing the startup script in the tomcat/bin folder.
Automatic Starting
To make Tomcat start automatically when the computer boots, add an init script that handles the auto-start and shutdown:
sudo vi /etc/init.d/tomcat
Now paste the following content (the script simply calls Tomcat’s standard startup.sh and shutdown.sh):
#!/bin/sh
# Tomcat auto-start
# description: Auto-starts tomcat
# processname: tomcat
# pidfile: /var/run/

export JAVA_HOME=/usr/lib/jvm/java-6-sun

case $1 in
start)
        sh /usr/local/tomcat/bin/startup.sh
        ;;
stop)
        sh /usr/local/tomcat/bin/shutdown.sh
        ;;
restart)
        sh /usr/local/tomcat/bin/shutdown.sh
        sh /usr/local/tomcat/bin/startup.sh
        ;;
esac
exit 0
You’ll need to make the script executable by running the chmod command:
sudo chmod 755 /etc/init.d/tomcat
The last step is actually linking this script to the startup folders with a symbolic link. Execute these two commands and we should be on our way.
sudo ln -s /etc/init.d/tomcat /etc/rc1.d/K99tomcat
sudo ln -s /etc/init.d/tomcat /etc/rc2.d/S99tomcat
Tomcat should now be fully installed.
Restart the system and Tomcat will start automatically.

SQL Developer 3.1 Data Pump Wizards (expdp, impdp)

SQL Developer 3.1 includes a neat GUI interface for Data Pump, allowing you to do on-the-fly exports and imports without having to remember the expdp/impdp command line syntax. This article gives an overview of these wizards.


General Points

The data pump wizards are accessible from the DBA browser (View > DBA).
Menu DBA
If no connections are available, click the “+” icon and select the appropriate connection from the drop-down list and click the “OK” button. In this case I will be using the “system” connection.
DBA Connections
Expanding the connection node in the tree lists a number of functions, including “Data Pump”. Expanding the “Data Pump” node displays “Export Jobs” and “Import Jobs” nodes, which can be used to monitor running data pump jobs.
DBA Data Pump
This tree will be the starting point for the operations listed in the following sections.

Exports (expdp)

Right-click on either the “Data Pump” or “Export Jobs” tree node and select the “Data Pump Export Wizard…” menu option.
Export Menu
Check the connection details are correct and select the type of export you want to perform, then click the “Next” button. In this case I will do a simple schema export.
Export Source
The screens that follow will vary depending on the type of export you perform. For the schema export, we must select the schema to be exported. To do this, highlight the schema of interest in the left-hand “Available” pane, then click the “>” button to move it to the right-hand “Selected” pane. When you are happy with your selection, click the “Next” button.
Export Schema
If you have any specific include/exclude filters, add them and click the “Next” button.
Export Filter
If you want to apply a WHERE clause to any or all of the tables, enter the details in the “Table Data” screen, then click the “Next” button.
Export Table Data
The “Options” screen allows you to increase the parallelism of the export, name the logfile and control the read-consistent point in time if necessary. When you have selected your specific options, click the “Next” button.
Export Options
Enter a suitable dump file name by double-clicking on the default name and choose the appropriate action should the file already exist, then click the “Next” button.
Export Output Files
If you want to schedule the export to run at a later time, or on regular intervals, enter the details here. The default is to run the job immediately. Click the “Next” button.
Export Job Schedule
Check the summary information is correct. If you need to keep a copy of the job you have just defined, click on the “PL/SQL” tab to see the code. When you are ready, click the “Finish” button.
Export Summary
Once the job is initiated, it can be seen under the “Export Jobs” node of the tree, where it can be monitored.
Export Jobs
As normal, the dump file and log file are located in the specified directory on the database server.

Imports (impdp)

In this section we will import the SCOTT schema, exported in the previous section, into a new user. The new user was created as follows.
CREATE USER scott_copy IDENTIFIED BY scott_copy;

Right-click on either the “Data Pump” or “Import Jobs” tree node and select the “Data Pump Import Wizard…” menu option.
Import Menu
Enter the type of import you want to do and the name of the dump file that is the source of the data, then click the “Next” button.
Import Type
The screens that follow will vary depending on the type of import you perform. Wait for the utility to interrogate the file, then select the schema of choice. If you need any specific include/exclude filters, they can be added in the “Include Exclude Filter” tab. Click the “Next” button.
Import Filter
To load the data into a new schema, we need to add a REMAP_SCHEMA entry. Once this is done, click the “Next” button.
Import Remapping
The “Options” screen allows you to increase the parallelism of the import, name the logfile and control the action if tables or unusable indexes exist. When you have selected your specific options, click the “Next” button.
Import Options
If you want to schedule the import to run at a later time, or on regular intervals, enter the details here. The default is to run the job immediately. Click the “Next” button.
Import Job Schedule
Check the summary information is correct. If you need to keep a copy of the job you have just defined, click on the “PL/SQL” tab to see the code. When you are ready, click the “Finish” button.
Import Summary
Once the job is initiated, it can be seen under the “Import Jobs” node of the tree, where it can be monitored.
Import Jobs
As normal, the log file is located in the specified directory on the database server.

Once the import is complete, we can see the tables have been imported into the new schema.

SQL> SELECT table_name FROM dba_tables WHERE owner ='SCOTT_COPY';


4 rows selected.

Hope this helps.

How to Increase Table Space in Oracle

1.  First we have to increase the tablespace of SYSTEM.

2.  Log in to the Oracle web page:  Ubuntu search pad  ->  type oracle  ->  Get Started with Oracle 11g Express Edition  ->  select it

3.  User Name: system, Password: ****

4.  Select Storage.

5.  Check the SYSTEM tablespace and see how much free space there is:

Tablespace   = SYSTEM
Free Space   = 46
Used Space   = 555
Percent Used = mostly full
Maximum      = 600

6.  Now we have to increase the free space. For that, log in from SQL Developer with the SYSDBA role:

7.  Connection Name: sys, Username: sys, Password: ***, Connection Type: Default, Role: SYSDBA

8.  Execute:  ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/XE/system.dbf' RESIZE 1024M;

9.  The output is:  database datafile '/U01/APP/ORACLE/ORADATA/XE/SYSTEM.DBF' altered.

10. Now check the free space; it has increased from 46 to 470.
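The numbers in steps 5-10 check out with a little arithmetic (plain Python; treating the Storage-page values as MB is an assumption based on the RESIZE 1024M value):

```python
# Values read from the Storage page (assumed to be in MB)
free_space, used_space, maximum = 46, 555, 600
print(round(used_space / maximum * 100, 1))  # percent used: 92.5, i.e. "mostly full"
# After RESIZE 1024M the datafile grows to 1024 MB, so free space becomes roughly:
print(1024 - used_space)                     # 469, matching the ~470 seen in step 10
```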

Number-to-Words Converter Java Code

import java.text.DecimalFormat;

public class EnglishNumberToWords {

  private static final String[] tensNames = {
    "",
    " ten",
    " twenty",
    " thirty",
    " forty",
    " fifty",
    " sixty",
    " seventy",
    " eighty",
    " ninety"
  };

  private static final String[] numNames = {
    "",
    " one",
    " two",
    " three",
    " four",
    " five",
    " six",
    " seven",
    " eight",
    " nine",
    " ten",
    " eleven",
    " twelve",
    " thirteen",
    " fourteen",
    " fifteen",
    " sixteen",
    " seventeen",
    " eighteen",
    " nineteen"
  };

  private static String convertLessThanOneThousand(int number) {
    String soFar;

    if (number % 100 < 20) {
      soFar = numNames[number % 100];
      number /= 100;
    } else {
      soFar = numNames[number % 10];
      number /= 10;

      soFar = tensNames[number % 10] + soFar;
      number /= 10;
    }
    if (number == 0) return soFar;
    return numNames[number] + " hundred" + soFar;
  }

  public static String convert(long number) {
    // 0 to 999 999 999 999
    if (number == 0) { return "zero"; }

    // pad with "0" to twelve digits
    String mask = "000000000000";
    DecimalFormat df = new DecimalFormat(mask);
    String snumber = df.format(number);

    // XXXnnnnnnnnn
    int billions = Integer.parseInt(snumber.substring(0, 3));
    // nnnXXXnnnnnn
    int millions = Integer.parseInt(snumber.substring(3, 6));
    // nnnnnnXXXnnn
    int hundredThousands = Integer.parseInt(snumber.substring(6, 9));
    // nnnnnnnnnXXX
    int thousands = Integer.parseInt(snumber.substring(9, 12));

    String tradBillions;
    switch (billions) {
    case 0:
      tradBillions = "";
      break;
    case 1:
      tradBillions = convertLessThanOneThousand(billions) + " billion ";
      break;
    default:
      tradBillions = convertLessThanOneThousand(billions) + " billion ";
    }
    String result = tradBillions;

    String tradMillions;
    switch (millions) {
    case 0:
      tradMillions = "";
      break;
    case 1:
      tradMillions = convertLessThanOneThousand(millions) + " million ";
      break;
    default:
      tradMillions = convertLessThanOneThousand(millions) + " millions ";
    }
    result = result + tradMillions;

    String tradHundredThousands;
    switch (hundredThousands) {
    case 0:
      tradHundredThousands = "";
      break;
    case 1:
      tradHundredThousands = "one thousand ";
      break;
    default:
      tradHundredThousands = convertLessThanOneThousand(hundredThousands) + " thousand ";
    }
    result = result + tradHundredThousands;

    String tradThousand = convertLessThanOneThousand(thousands);
    result = result + tradThousand;

    // remove extra spaces!
    return result.replaceAll("^\\s+", "").replaceAll("\\b\\s{2,}\\b", " ");
  }

  /**
   * testing
   * @param args
   */
  public static void main(String[] args) {
    System.out.println("*** " + EnglishNumberToWords.convert(0));
    System.out.println("*** " + EnglishNumberToWords.convert(1));
    System.out.println("*** " + EnglishNumberToWords.convert(16));
    System.out.println("*** " + EnglishNumberToWords.convert(100));
    System.out.println("*** " + EnglishNumberToWords.convert(118));
    System.out.println("*** " + EnglishNumberToWords.convert(200));
    System.out.println("*** " + EnglishNumberToWords.convert(219));
    System.out.println("*** " + EnglishNumberToWords.convert(800));
    System.out.println("*** " + EnglishNumberToWords.convert(801));
    System.out.println("*** " + EnglishNumberToWords.convert(1316));
    System.out.println("*** " + EnglishNumberToWords.convert(1000000));
    System.out.println("*** " + EnglishNumberToWords.convert(2000000));
    System.out.println("*** " + EnglishNumberToWords.convert(3000200));
    System.out.println("*** " + EnglishNumberToWords.convert(700000));
    System.out.println("*** " + EnglishNumberToWords.convert(9000000));
    System.out.println("*** " + EnglishNumberToWords.convert(9001000));
    System.out.println("*** " + EnglishNumberToWords.convert(123456789));
    System.out.println("*** " + EnglishNumberToWords.convert(2147483647));
    System.out.println("*** " + EnglishNumberToWords.convert(3000000010L));
  }
}

Output:
*** zero
*** one
*** sixteen
*** one hundred
*** one hundred eighteen
*** two hundred
*** two hundred nineteen
*** eight hundred
*** eight hundred one
*** one thousand three hundred sixteen
*** one million
*** two millions
*** three millions two hundred
*** seven hundred thousand
*** nine millions
*** nine millions one thousand
*** one hundred twenty three millions four hundred fifty six thousand seven hundred eighty nine
*** two billion one hundred forty seven millions four hundred eighty three thousand six hundred forty seven
*** three billion ten

How to stop or kill data pump jobs in Oracle

In my never-ending frustrations with using Oracle (seriously, I loathe Oracle above all else), I could not find a definitive answer on how to stop, kill or delete Data Pump jobs being executed. I found the answer via Metalink, and I’m going to share it because I feel these answers should be easily accessible. It’s a two-step process.
1. Get the list of datapump jobs:

SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

-- locate Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1,2;

The output might look something like this:

OWNER_NAME JOB_NAME             OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
---------- -------------------- ----------- ----------- ----------- -----------------

Two pieces of information are needed to perform the kill: the owner_name and the job_name from the output above.
With that information, we can now stop and kill the job:

SET serveroutput on
SET lines 100
-- Format: DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
DECLARE
   h1 NUMBER;
BEGIN
   h1 := DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
   DBMS_DATAPUMP.STOP_JOB(h1);
END;
/

Check that the job has stopped:

SQL> SET lines 200
SQL> COL owner_name FORMAT a10;
SQL> COL job_name FORMAT a20
SQL> COL state FORMAT a11
SQL> COL operation LIKE state
SQL> COL job_mode LIKE state
SQL> -- locate Data Pump jobs:
SQL> SELECT owner_name, job_name, operation, job_mode,
2 state, attached_sessions
3 FROM dba_datapump_jobs
4 WHERE job_name NOT LIKE 'BIN$%'
5 ORDER BY 1,2;

no rows selected

How to delete/remove non-executing Data Pump jobs?

Sometimes we get a requirement to delete Data Pump jobs which stopped abruptly for some reason. The following steps will help us do that.
1. First we need to identify which jobs are in NOT RUNNING status. For this, we use the query below (basically we are getting this info from dba_datapump_jobs):
SET lines 200
SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs;
The above query returns the Data Pump job information. In the output, jobs whose state shows NOT RUNNING are the ones that need to be removed.
Note: a job’s state will show NOT RUNNING even if a user deliberately stopped it, so before taking any action, consult the user and get confirmation.
2. We now need to identify the master tables created for these jobs. It can be done as follows:
SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner=j.owner_name AND o.object_name=j.job_name
   AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID        85283 TABLE        SCOTT.EXPDP_20051121
3. We now need to drop these master tables in order to clean up the jobs, for example: DROP TABLE scott.expdp_20051121;
4. Re-run the query from step 1 to check whether any jobs still show up. If so, stop them again using the STOP_JOB command in expdp/impdp interactive mode or the DBMS_DATAPUMP.STOP_JOB procedure.
Some important points:
1. Data Pump jobs that are not running have no impact on currently executing ones.
2. When any Data Pump job (either export or import) is initiated, master and worker processes are created.
3. When we terminate an export Data Pump job, the master and worker processes are killed, and this does not lead to data corruption.
4. But when an import Data Pump job is terminated, the import may be left incomplete, as the master and worker processes are killed.