diff --git a/LICENSE b/LICENSE
index f9ca72d..89b8259 100755
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
MIT License
-Copyright (c) 2018 Xuanyi Dong
+Copyright (c) 2019 Xuanyi Dong
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
diff --git a/README.md b/README.md
index 5ad8fe4..0a46256 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,11 @@
-# Searching for A Robust Neural Architecture in Four GPU Hours
+## Searching for A Robust Neural Architecture in Four GPU Hours
We propose a gradient-based neural architecture search approach using a Differentiable Architecture Sampler (GDAS).
-## Requirements
+
+Figure 1. We utilize a DAG to represent the search space of a neural cell. Different operations (colored arrows) transform one node (square) into its intermediate features (small circles), and each node is the sum of the intermediate features transformed from its previous nodes. As indicated by the solid connections, the neural cell in the proposed GDAS is a sampled sub-graph of this DAG. Specifically, among the intermediate features between every two nodes, GDAS samples one feature in a differentiable way.
+
+### Requirements
- PyTorch 1.0.1
- Python 3.6
- opencv
@@ -10,7 +13,7 @@ We propose A Gradient-based neural architecture search approach using Differenti
conda install pytorch torchvision cuda100 -c pytorch
```
-## Usages
+### Usages
Train the searched CNN on CIFAR
```
@@ -41,10 +44,14 @@ CUDA_VISIBLE_DEVICES=0 bash ./scripts-rnn/train-WT2.sh DARTS_V2
CUDA_VISIBLE_DEVICES=0 bash ./scripts-rnn/train-WT2.sh GDAS
```
-## Training Logs
+### Training Logs
Some training logs can be found in `./data/logs/`, and some pre-trained models can be found in [Google Drive](https://drive.google.com/open?id=1Ofhc49xC1PLIX4O708gJZ1ugzz4td_RJ).
-## Citation
+### Experimental Results
+
+Figure 2. Top-1 and top-5 errors on ImageNet.
+
+### Citation
```
@inproceedings{dong2019search,
title={Searching for A Robust Neural Architecture in Four GPU Hours},
diff --git a/data/GDAS.png b/data/GDAS.png
new file mode 100755
index 0000000..be8f026
Binary files /dev/null and b/data/GDAS.png differ
diff --git a/data/decompress.py b/data/decompress.py
index 9811dd5..032cb0e 100644
--- a/data/decompress.py
+++ b/data/decompress.py
@@ -16,7 +16,7 @@ def execute(cmds, idx, num):
def command(prefix, cmd):
#print ('{:}{:}'.format(prefix, cmd))
#if execute: os.system(cmd)
- xcmd = '(echo {:}; {:}; sleep 0.1s)'.format(prefix, cmd)
+ xcmd = '(echo {:} $(date +"%Y-%h-%d--%T") "PID:"$$; {:}; sleep 0.1s)'.format(prefix, cmd)
return xcmd
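The updated `command` helper in `data/decompress.py` wraps each shell command so that an index prefix, a timestamp, and the executing shell's PID are echoed before the command runs; `$(date ...)` and `$$` are expanded by the shell at run time, not by Python. A standalone sketch of what the wrapped command does when executed (the echoed payload command is illustrative):

```shell
#!/bin/bash
# Sketch of the wrapped command emitted by data/decompress.py: echo an
# index prefix, a timestamp, and the shell PID, then run the real command
# (here an illustrative echo standing in for a tar/zip extraction).
prefix="[1/10]"
out=$( (echo "${prefix}" "$(date +"%Y-%h-%d--%T")" "PID:$$"; echo part-1-done; sleep 0.1) )
echo "${out}"
```

The short `sleep` keeps consecutive log lines from interleaving when many such commands run back to back.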
diff --git a/data/imagenet-results.png b/data/imagenet-results.png
new file mode 100755
index 0000000..6ce7c68
Binary files /dev/null and b/data/imagenet-results.png differ
diff --git a/scripts-cluster/job-script.sh b/scripts-cluster/job-script.sh
index 2480d0a..79b40ac 100644
--- a/scripts-cluster/job-script.sh
+++ b/scripts-cluster/job-script.sh
@@ -19,14 +19,16 @@ else
fi
echo "CHECK-DATA-DIR DONE"
+PID=$$
# config python
PYTHON_ENV=py36_pytorch1.0_env0.1.3.tar.gz
wget -e "http_proxy=cp01-sys-hic-gpu-02.cp01:8888" http://cp01-sys-hic-gpu-02.cp01/HGCP_DEMO/$PYTHON_ENV > screen.log 2>&1
tar xzf $PYTHON_ENV
-echo "JOB-PWD : " `pwd`
-echo "JOB-files : " `ls`
+echo "JOB-PID : "${PID}
+echo "JOB-PWD : "$(pwd)
+echo "JOB-files : "$(ls)
echo "JOB-CUDA_VISIBLE_DEVICES: " ${CUDA_VISIBLE_DEVICES}
./env/bin/python --version
diff --git a/scripts-cnn/train-imagenet.sh b/scripts-cnn/train-imagenet.sh
index d182060..8a1e05a 100644
--- a/scripts-cnn/train-imagenet.sh
+++ b/scripts-cnn/train-imagenet.sh
@@ -18,6 +18,7 @@ layers=$3
SAVED=./output/NAS-CNN/${arch}-${dataset}-C${channels}-L${layers}-E250
PY_C="./env/bin/python"
+#PY_C="$CONDA_PYTHON_EXE"
if [ ! -f ${PY_C} ]; then
echo "Local Run with Python: "`which python`
@@ -27,12 +28,23 @@ else
echo "Unzip ILSVRC2012"
tar --version
#tar xf ./hadoop-data/ILSVRC2012.tar -C ${TORCH_HOME}
- #${PY_C} ./data/decompress.py ./hadoop-data/ILSVRC2012-TAR ./data/data/ILSVRC2012 tar > ./data/data/get_imagenet.sh
- ${PY_C} ./data/decompress.py ./hadoop-data/ILSVRC2012-ZIP ./data/data/ILSVRC2012 zip > ./data/data/get_imagenet.sh
- bash ./data/data/get_imagenet.sh
+ commands="./data/data/get_imagenet.sh"
+ ${PY_C} ./data/decompress.py ./hadoop-data/ILSVRC2012-TAR ./data/data/ILSVRC2012 tar > ${commands}
+ #${PY_C} ./data/decompress.py ./hadoop-data/ILSVRC2012-ZIP ./data/data/ILSVRC2012 zip > ./data/data/get_imagenet.sh
+ #bash ./data/data/get_imagenet.sh
+ count=0
+ while read -r line; do
+ temp_file="./data/data/TEMP-${count}.sh"
+ echo "${line}" > ${temp_file}
+ bash ${temp_file}
+ count=$((count+1))
+ done < "${commands}"
echo "Unzip ILSVRC2012 done"
fi
+exit 1 # NOTE: stops the job after data preparation; remove this line to continue to the training commands below
+
+
${PY_C} --version
${PY_C} ./exps-cnn/train_base.py \
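The loop added to `scripts-cnn/train-imagenet.sh` above runs the generated command file one line at a time, copying each line into its own temporary script before executing it, so a single failing extraction step is easy to locate. A standalone sketch of that pattern, using an illustrative two-line command file:

```shell
#!/bin/bash
# Sketch of the line-by-line execution pattern from train-imagenet.sh:
# each line of a generated command file becomes its own tiny script.
commands="commands.txt"
printf 'echo step-one\necho step-two\n' > "${commands}"  # illustrative commands

count=0
while read -r line; do
  temp_file="TEMP-${count}.sh"
  echo "${line}" > "${temp_file}"
  bash "${temp_file}"
  count=$((count+1))
done < "${commands}"
echo "executed ${count} commands"
```

Compared with sourcing the whole file via `bash get_imagenet.sh`, this isolates each command's exit status and leaves the per-step `TEMP-*.sh` files behind for inspection.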