Data Pump

Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.
11g Enhancements
· COMPRESSION
· Encryption Parameters
· ENCRYPTION and ENCRYPTION_PASSWORD
· ENCRYPTION_ALGORITHM
· ENCRYPTION_MODE
· TRANSPORTABLE
· PARTITION_OPTIONS
· REUSE_DUMPFILES
· REMAP_TABLE
· DATA_OPTIONS
· SKIP_CONSTRAINT_ERRORS
· XML_CLOBS
· REMAP_DATA
· Miscellaneous Enhancements

COMPRESSION

The COMPRESSION parameter allows you to decide what, if anything, you wish to compress in your export. The syntax is shown below.
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
The available options are:
· ALL: Both metadata and data are compressed.
· DATA_ONLY: Only data is compressed.
· METADATA_ONLY: Only metadata is compressed. This is the default setting.
· NONE: Nothing is compressed.
Here is an example of the COMPRESSION parameter being used.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  compression=all
The COMPATIBLE initialization parameter should be set to "11.0.0" or higher to use these options, except for the METADATA_ONLY option, which is available with a COMPATIBLE setting of "10.2".
Data compression requires the Advanced Compression option of Enterprise Edition, as described here.
Encryption Parameters
Data Pump encryption is an Enterprise Edition feature, so the parameters described below are only relevant for Enterprise Edition installations. In addition, the COMPATIBLE initialization parameter must be set to "11.0.0" or higher to use these features.
ENCRYPTION and ENCRYPTION_PASSWORD
The use of encryption is controlled by a combination of the ENCRYPTION and ENCRYPTION_PASSWORD parameters. The syntax for the ENCRYPTION parameter is shown below.
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}

The available options are:

· ALL: Both metadata and data are encrypted.
· DATA_ONLY: Only data is encrypted.
· ENCRYPTED_COLUMNS_ONLY: Only encrypted columns are written to the dump file in an encrypted format.
· METADATA_ONLY: Only metadata is encrypted.
· NONE: Nothing is encrypted.
If neither the ENCRYPTION nor the ENCRYPTION_PASSWORD parameter is set, the required level of encryption is assumed to be NONE. If only the ENCRYPTION_PASSWORD parameter is specified, the required level of encryption is assumed to be ALL. Here is an example of these parameters being used.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  encryption=all encryption_password=password
ENCRYPTION_ALGORITHM
The ENCRYPTION_ALGORITHM parameter specifies the encryption algorithm to be used during the export, with the default being "AES128". The syntax is shown below.
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }
The ENCRYPTION_ALGORITHM parameter must be used in conjunction with the ENCRYPTION or ENCRYPTION_PASSWORD parameters, as shown below.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  encryption=all encryption_password=password encryption_algorithm=AES256
ENCRYPTION_MODE
The ENCRYPTION_MODE parameter specifies the type of security used during export and import operations. The syntax is shown below.
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }
The allowable values and their default settings are explained below:
· DUAL: This mode creates a dump file that can be imported using an Oracle Encryption Wallet, or the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there is an open wallet.
· PASSWORD: This mode creates a dump file that can only be imported using the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there isn't an open wallet.
· TRANSPARENT: This mode creates an encrypted dump file using an open Oracle Encryption Wallet. If the ENCRYPTION_PASSWORD is specified while using this mode, an error is produced. This is the default setting if only the ENCRYPTION parameter is set.
Wallet setup is described here.
The ENCRYPTION_MODE requires either the ENCRYPTION or ENCRYPTION_PASSWORD parameter to be specified.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  encryption=all encryption_password=password encryption_mode=password
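The DUAL and TRANSPARENT modes depend on an open Oracle Encryption Wallet. As a rough sketch of 11g wallet setup (the wallet directory and password here are illustrative, not taken from this article), assuming sqlnet.ora already points at the wallet location:

```sql
-- Illustrative sqlnet.ora entry (comment only, not SQL):
--   ENCRYPTION_WALLET_LOCATION =
--     (SOURCE = (METHOD = FILE)
--       (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/DB11G/wallet)))

-- Create the wallet and set the master encryption key (first time only).
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "WalletPassword1";

-- Open an existing wallet, for example after an instance restart.
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "WalletPassword1";
```

With the wallet open, TRANSPARENT mode needs no password on the expdp command line at all.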
TRANSPORTABLE
The TRANSPORTABLE parameter is similar to the TRANSPORT_TABLESPACES parameter available in previous versions, in that it exports and imports only metadata about a table, relying on you to manually transfer the relevant tablespace datafiles. The export operation lists the tablespaces that must be transferred. The syntax is shown below.
TRANSPORTABLE = {ALWAYS | NEVER}
The value ALWAYS turns on the transportable mode, while the default value of NEVER indicates this is a regular export/import.
The following restrictions apply during exports using the TRANSPORTABLE parameter:
· This parameter is only applicable during table-level exports.
· The user performing the operation must have the EXP_FULL_DATABASE privilege.
· Tablespaces containing the source objects must be read-only.
· The COMPATIBLE initialization parameter must be set to 11.0.0 or higher.
· The default tablespace of the user performing the export must not be the same as any of the tablespaces being transported.
Some extra restrictions apply during import operations:
· The NETWORK_LINK parameter must be specified during the import operation. This parameter is set to a valid database link to the source schema.
· The schema performing the import must have both EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
· The TRANSPORT_DATAFILES parameter is used to identify the datafiles holding the table data.
Examples of the export and import operations are shown below.
expdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  transportable=ALWAYS

impdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log

  transportable=ALWAYS network_link=DB11G transport_datafiles='/u01/oradata/DB11G/test01.dbf'
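The read-only requirement and the manual datafile transfer can be sketched as follows; the tablespace name TEST_TS is illustrative:

```sql
-- On the source database: make the tablespace read-only before the export.
ALTER TABLESPACE test_ts READ ONLY;

-- Run the expdp command, then copy the datafile(s) it lists
-- (e.g. /u01/oradata/DB11G/test01.dbf) to the destination server.

-- Once the datafiles have been copied, the source tablespace can be
-- returned to read-write mode.
ALTER TABLESPACE test_ts READ WRITE;
```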
PARTITION_OPTIONS
The PARTITION_OPTIONS parameter determines how partitions will be handled during export and import operations. The syntax is shown below.
PARTITION_OPTIONS={NONE | DEPARTITION | MERGE}
The allowable values are:
· NONE: The partitions are created exactly as they were on the system the export was taken from.
· DEPARTITION: Each partition and sub-partition is created as a separate table, named using a combination of the table and (sub-)partition name.
· MERGE: Combines all partitions into a single table.
The NONE and MERGE options are not possible if the export was done using the TRANSPORTABLE parameter with a partition or subpartition filter. If there are any grants on objects being departitioned, an error message is generated and the objects are not loaded.
expdp test/test directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log tables=test.tab1
  partition_options=merge
REUSE_DUMPFILES
The REUSE_DUMPFILES parameter can be used to prevent errors being issued if the export attempts to write to a dump file that already exists.
REUSE_DUMPFILES={Y | N}
When set to "Y", any existing dump files will be overwritten. When the default value of "N" is used, an error is issued if the dump file already exists.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  reuse_dumpfiles=y
REMAP_TABLE
This parameter allows a table to be renamed during import operations performed using the TRANSPORTABLE method. It can also be used to alter the base table name used during PARTITION_OPTIONS imports. The syntax is shown below.
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
An example is shown below.
impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
  remap_table=TEST.TAB1:TAB2
Existing tables are not renamed, only tables created by the import.
DATA_OPTIONS
SKIP_CONSTRAINT_ERRORS
During import operations using the external table access method, setting the DATA_OPTIONS parameter to SKIP_CONSTRAINT_ERRORS allows load operations to continue through non-deferred constraint violations, with any violations logged for future reference. Without this setting, the default action is to roll back the whole operation. The syntax is shown below.
DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
An example is shown below.
impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
  data_options=SKIP_CONSTRAINT_ERRORS
This parameter has no impact on deferred constraints, which still cause the operation to be rolled back once a violation is detected. If the object being loaded has existing unique indexes or constraints, the APPEND hint will not be used, which may adversely affect performance.
XML_CLOBS
During an export, if XMLTYPE columns are currently stored as CLOBs, they will automatically be exported as uncompressed CLOBs. If, on the other hand, they are currently stored as any combination of object-relational, binary or CLOB formats, they will be exported in compressed format by default. Setting the DATA_OPTIONS parameter to XML_CLOBS specifies that all XMLTYPE columns should be exported as uncompressed CLOBs, regardless of the default action. The syntax is shown below.
DATA_OPTIONS=XML_CLOBS
An example is shown below.
expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  version=11.1 data_options=XML_CLOBS
Both the export and import must use the same XML schema and the job version must be set to 11.0.0 or higher.
REMAP_DATA
During export and import operations, the REMAP_DATA parameter allows you to associate a packaged remap function that accepts the column value as a parameter and returns a modified version of the data. The syntax is shown below.
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
This can be used to mask sensitive data during export and import operations by replacing the original data with random alternatives. The mapping is done on a column-by-column basis, as shown below.
expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  remap_data=tab1.col1:remap_pkg.remap_col1
  remap_data=tab1.col2:remap_pkg.remap_col2
The remapping function must return the same datatype as the source column and it must not perform a commit or rollback.
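A minimal sketch of a remap package to go with the example above, assuming COL1 is a VARCHAR2 column; the masking logic is purely illustrative, and a real package would include one function per remapped column:

```sql
CREATE OR REPLACE PACKAGE remap_pkg AS
  FUNCTION remap_col1 (p_value IN VARCHAR2) RETURN VARCHAR2;
END remap_pkg;
/

CREATE OR REPLACE PACKAGE BODY remap_pkg AS
  -- The function must return the same datatype as the source column
  -- and must not perform a commit or rollback.
  FUNCTION remap_col1 (p_value IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- Replace the value with a random uppercase string of the same length.
    RETURN DBMS_RANDOM.STRING('U', LENGTH(p_value));
  END remap_col1;
END remap_pkg;
/
```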
Miscellaneous Enhancements
Worker processes that have stopped due to certain errors will now have a one-time automatic restart. If the process stops a second time, it must be restarted manually.




12c Enhancements
· NOLOGGING Option (DISABLE_ARCHIVE_LOGGING)
· LOGTIME Parameter
· Export View as Table
· Change Table Compression at Import
· Change Table LOB Storage at Import
· Dumpfile Compression Options
· Multitenant Option Support (CDB and PDB)
· Audit Commands
· Encryption Password Enhancements
· Transportable Database
· Miscellaneous Enhancements

NOLOGGING Option (DISABLE_ARCHIVE_LOGGING)
The TRANSFORM parameter of impdp has been extended to include a DISABLE_ARCHIVE_LOGGING option. The default setting of "N" has no effect on logging behaviour. Using a value of "Y" reduces the logging associated with tables and indexes during the import by setting their logging attribute to NOLOGGING before the data is imported and resetting it to LOGGING once the operation is complete.
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
The effect can be limited to a specific type of object (TABLE or INDEX) by appending the object type.
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX

An example of its use is shown below.
$ impdp system/Password1@pdb1 directory=test_dir dumpfile=emp.dmp logfile=impdp_emp.log \
     remap_schema=scott:test transform=disable_archive_logging:y
The DISABLE_ARCHIVE_LOGGING option has no effect if the database is running in FORCE LOGGING mode.
LOGTIME Parameter
The LOGTIME parameter determines if timestamps should be included in the output messages from the expdp and impdp utilities.
LOGTIME=[NONE | STATUS | LOGFILE | ALL]
The allowable values are explained below.
· NONE : The default value, which indicates that no timestamps should be included in the output, making the output look similar to that of previous versions.
· STATUS : Timestamps are included in output to the console, but not in the associated log file.
· LOGFILE : Timestamps are included in output to the log file, but not in the associated console messages.
· ALL : Timestamps are included in output to the log file and console.
An example of the output is shown below.
$ expdp scott/tiger@pdb1 tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log logtime=all

Export: Release 12.1.0.1.0 - Production on Wed Nov 20 22:11:57 2013


Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options
20-NOV-13 22:12:09.312: Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/********@pdb1 tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log logtime=all 
20-NOV-13 22:12:13.602: Estimate in progress using BLOCKS method...
20-NOV-13 22:12:17.797: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
20-NOV-13 22:12:18.145: Total estimation using BLOCKS method: 64 KB
20-NOV-13 22:12:30.583: Processing object type TABLE_EXPORT/TABLE/TABLE
20-NOV-13 22:12:33.649: Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
20-NOV-13 22:12:37.744: Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
20-NOV-13 22:12:38.065: Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
20-NOV-13 22:12:38.723: Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
20-NOV-13 22:12:41.052: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
20-NOV-13 22:12:41.337: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
20-NOV-13 22:13:38.255: . . exported "SCOTT"."EMP"                                8.75 KB      14 rows
20-NOV-13 22:13:40.483: Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
20-NOV-13 22:13:40.507: ******************************************************************************
20-NOV-13 22:13:40.518: Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
20-NOV-13 22:13:40.545:   /home/oracle/emp.dmp
20-NOV-13 22:13:40.677: Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Wed Nov 20 22:13:40 2013 elapsed 0 00:01:36

$

Export View as Table
The VIEWS_AS_TABLES parameter allows Data Pump to export the specified views as if they were tables. The table structure matches the view columns, with the data being the rows returned by the query supporting the view.
VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], ...
To see it working, create a view.
CONN scott/tiger@pdb1

CREATE VIEW emp_v AS 

  SELECT * FROM emp;
Now export the view using the VIEWS_AS_TABLES parameter.
$ expdp scott/tiger views_as_tables=scott.emp_v directory=test_dir dumpfile=emp_v.dmp logfile=expdp_emp_v.log
By default expdp creates a temporary table as a copy of the view, but with no data, to provide a source of the metadata for the export. Alternatively, you can specify a table with the appropriate structure. This probably only makes sense if you are using this functionality in a read-only database.
There are a number of restrictions relating to this parameter, which you can read about here.
Change Table Compression at Import
The TABLE_COMPRESSION_CLAUSE clause of the TRANSFORM parameter allows the table compression characteristics of the tables in an import to be altered on the fly.
TRANSFORM=TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]
The allowable values for the TABLE_COMPRESSION_CLAUSE include the following.
· NONE : The table compression clause is omitted, so the table takes on the compression characteristics of the tablespace.
· NOCOMPRESS : Disables table compression.
· COMPRESS : Enables basic table compression.
· ROW STORE COMPRESS BASIC : Same as COMPRESS.
· ROW STORE COMPRESS ADVANCED : Enables advanced compression, also known as OLTP compression.
· COLUMN STORE COMPRESS FOR QUERY : Hybrid Columnar Compression (HCC) available in Exadata and ZFS storage appliances.
· COLUMN STORE COMPRESS FOR ARCHIVE : Hybrid Columnar Compression (HCC) available in Exadata and ZFS storage appliances.
Note: Compression clauses that contain whitespace must be enclosed in single or double quotes.
An example of its use is shown below.
$ impdp system/Password1@pdb1 directory=test_dir dumpfile=emp.dmp logfile=impdp_emp.log \
     remap_schema=scott:test transform=table_compression_clause:compress
Change Table LOB Storage at Import
The LOB_STORAGE clause of the TRANSFORM parameter allows the LOB storage characteristics of the tables in a non-transportable import to be altered on the fly.
TRANSFORM=LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]
The allowable values for the LOB_STORAGE clause include the following.
· SECUREFILE : The LOBs are stored as SecureFiles.
· BASICFILE : The LOBs are stored as BasicFiles.
· DEFAULT : The LOB storage is determined by the database default.
· NO_CHANGE : The LOB storage matches that of the source object.
An example of its use is shown below.
$ impdp system/Password1@pdb1 directory=test_dir dumpfile=lob_table.dmp logfile=impdp_lob_table.log \
     transform=lob_storage:securefile
Dumpfile Compression Options
As part of the Advanced Compression option, you can specify the COMPRESSION_ALGORITHM parameter to determine the level of compression of the export dumpfile. This is not related to table compression discussed previously.
COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]
The meanings of the available values are described below.
· BASIC : The same compression algorithm used in previous versions. Provides good compression without severely impacting performance.
· LOW : For use when reduced CPU utilisation is a priority over compression ratio.
· MEDIUM : The recommended option. Similar characteristics to BASIC, but uses a different algorithm.
· HIGH : Maximum available compression, but more CPU intensive.
An example of its use is shown below.
$ expdp scott/tiger tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log \
        compression=all compression_algorithm=medium
Multitenant Option Support (CDB and PDB)
Oracle Database 12c introduced the multitenant option, allowing multiple pluggable databases (PDBs) to reside in a single container database (CDB). For the most part, using data pump against a PDB is indistinguishable from using it against a non-CDB instance.
Exports using the FULL option from 11.2.0.2 or higher can be imported into a clean PDB in the same way you would expect for a regular full import.
There are some minor restrictions, which you can read about here.
Audit Commands
Oracle 12c allows data pump jobs to be audited by creating an audit policy.
CREATE AUDIT POLICY policy_name 
  ACTIONS COMPONENT=DATAPUMP [EXPORT | IMPORT | ALL];
When this policy is applied to a user, their data pump jobs will appear in the audit trail. The following policy audits all data pump operations. The policy is applied to the SCOTT user.
CONN / AS SYSDBA
CREATE AUDIT POLICY audit_dp_all_policy ACTIONS COMPONENT=DATAPUMP ALL;
AUDIT POLICY audit_dp_all_policy BY scott;
Run the following data pump command.
$ expdp scott/tiger tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log
Checking the audit trail shows the data pump job was audited.
-- Flush audit information to disk.
EXEC DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;

SET LINESIZE 200

COLUMN event_timestamp FORMAT A30
COLUMN dp_text_parameters1 FORMAT A30
COLUMN dp_boolean_parameters1 FORMAT A30

SELECT event_timestamp,

       dp_text_parameters1,
       dp_boolean_parameters1
FROM   unified_audit_trail
WHERE  audit_type = 'Datapump';

EVENT_TIMESTAMP         DP_TEXT_PARAMETERS1       DP_BOOLEAN_PARAMETERS1

------------------------------ ------------------------------ ------------------------------
14-DEC-13 09.47.40.098637 PM   MASTER TABLE:  "SCOTT"."SYS_EX MASTER_ONLY: FALSE, DATA_ONLY:
          PORT_TABLE_01" , JOB_TYPE: EXP  FALSE, METADATA_ONLY: FALSE,
          ORT, METADATA_JOB_MODE: TABLE_ DUMPFILE_PRESENT: TRUE, JOB_RE
          EXPORT, JOB VERSION: 12.1.0.0. STARTED: FALSE
          0, ACCESS METHOD: AUTOMATIC, D
          ATA OPTIONS: 0, DUMPER DIRECTO
          RY: NULL  REMOTE LINK: NULL, T
          ABLE EXISTS: NULL, PARTITION O
          PTIONS: NONE

SQL>

Encryption Password Enhancements
In previous versions, data pump encryption required the ENCRYPTION_PASSWORD parameter to be entered on the command line, making password snooping relatively easy.
In Oracle 12c, the ENCRYPTION_PWD_PROMPT parameter enables encryption without requiring the password to be entered as a command line parameter. Instead, the user is prompted for the password at runtime, with their response not echoed to the screen.
ENCRYPTION_PWD_PROMPT=[YES | NO]
An example of its use is shown below.
$ expdp scott/tiger tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log \
        encryption_pwd_prompt=yes

Export: Release 12.1.0.1.0 - Production on Sat Dec 14 21:09:11 2013


Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Encryption Password:

Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** tables=emp directory=test_dir
dumpfile=emp.dmp logfile=expdp_emp.log encryption_pwd_prompt=yes
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/POST_TABLE_ACTION
. . exported "SCOTT"."EMP"                               8.765 KB      14 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /tmp/emp.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Sat Dec 14 21:09:55 2013 elapsed 0 00:00:41

$

Transportable Database
The TRANSPORTABLE option can now be combined with the FULL option to transport a whole database.
$ expdp system/Password1 full=Y transportable=always version=12 directory=TEMP_DIR \
   dumpfile=orcl.dmp logfile=expdporcl.log
This method can also be used to upgrade the database as described here.
Miscellaneous Enhancements
· Data Pump supports extended data types, provided the VERSION parameter is not set to a value prior to 12.1.
· LOB columns with a domain index can now take advantage of the direct path load method.
