
Preprocessing Issues #2

Open
msharara1998 opened this issue Feb 1, 2023 · 8 comments

@msharara1998 commented Feb 1, 2023

Hello,
1. You mentioned in the paper that you calculated the z-score of each feature. However, upon inspecting the dataset, I found that no feature has a value greater than one. To my knowledge, the z-score is calculated as:

z = (x - E(x)) / std(x)

Have you standardized the data using the z-score above, or normalized it by dividing each column's values by its maximum value? (See the short sketch at the end of this comment for the distinction I mean.)

  2. Concerning the created_at feature, how did you normalize it to a value between 0 and 1? I did not find information in the paper about this specific preprocessing step.

It would also help reproducibility to share the user_name feature, or at least the user ID.

  3. I suspect I will have trouble reproducing the graph-based features. My main concern is how to preprocess a new data point (suppose I train a model on your dataset and then want to predict on a new user) so that I end up with exactly the same processed representation as in the released dataset.

Several authors who released public datasets have shared the user IDs. I kindly request that you share the account IDs or usernames with me in private via my email ([email protected]). If you really cannot share them, please provide the preprocessing code for the entire dataset (especially the graph features).

A related concern is which Twitter API endpoint I should use so that I can construct and preprocess a data point identically to the dataset (especially the graph part). Sharing the code you used to go from the raw Twitter API responses to this dataset would therefore be extremely helpful.

Thank you in advance.
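To make the distinction in point 1 concrete, here is a minimal sketch using synthetic data (not your dataset); the numbers are purely illustrative:

import numpy as np

# Synthetic, skewed data (something like follower counts), purely for illustration.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=3.0, sigma=1.0, size=1000)

# z-score standardization: zero mean, unit variance, NOT bounded to [0, 1].
z = (x - x.mean()) / x.std()

# Min-max normalization: bounded to exactly [0, 1].
x_minmax = (x - x.min()) / (x.max() - x.min())

print(z.min(), z.max())                # typically values well outside [0, 1]
print(x_minmax.min(), x_minmax.max())  # 0.0 and 1.0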

@msharara1998 (Author)

The issue with the preprocessed data goes further: if we adopt this dataset to train a model and then want to predict on a new data point, that point must be preprocessed in exactly the same way. With min-max scaling as the normalization, for example, we need to subtract each feature's minimum (as computed on your dataset) and divide by its maximum minus its minimum. The problem is that we do not have these values. I think this needs to be resolved for anyone to benefit from this dataset. Thanks.

@GraphDetec (Owner)

MGTAB is a standardized data set. The code used to standardize the data is as follows:

import os
import json
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df_train = read_info_data("./train_new.json")
df_train_feature = exact_and_process_feature(df_train, with_label, 'labeled_df_train.json')
df_test = read_info_data("./test_new.json")
df_test_feature = exact_and_process_feature(df_test, with_label, 'labeled_df_test.json')

if task == 1:
    df_train_feature.pop("isBot")
    df_test_feature.pop("isBot")
else:
    df_train_feature.pop("category")
    df_test_feature.pop("category")

numerical_cols = [
    "followers_count",
    "friends_count",
    "listed_count",
    "created_at",
    "favourites_count",
    "statuses_count",
    'screen_name_length',
    'name_length',
    'description_length',
    'followers_friends_ratios',
]

df = pd.concat([df_train_feature, df_test_feature], ignore_index=True)
df_name = df['screen_name']
df[numerical_cols] = MinMaxScaler().fit_transform(df[numerical_cols])

The feature processing function is as follows:

def exact_and_process_feature(df, with_label, file_name):
    if not os.path.exists('./process/'+file_name):
        to_drop = list(df.keys())
        not_to_drop = [
            # 'has_extended_profile',
            "profile_use_background_image",
            "default_profile",
            "default_profile_image",
            "verified",
            "geo_enabled",
            'profile_background_image_url',
            'url',
            'profile_background_color',
            'profile_sidebar_fill_color',
            'profile_sidebar_border_color',
            "followers_count",
            "friends_count",
            "listed_count",
            "created_at",
            "favourites_count",
            "statuses_count",
            "screen_name",
            'name',
            'description',
            'id',
            'friends_list',
            'followers_list',
            'mention_list',
            'url_list',
            'hashtag_list'
        ]

        if with_label:
            not_to_drop.append('category')
            not_to_drop.append('isBot')

        for key in not_to_drop:
            to_drop.remove(key)

        df.drop(columns=to_drop, axis=1, inplace=True)
        df = change_df_dtypes(df)
        df.to_json('./process/'+file_name)
        print('saving {}'.format(file_name))
    else:
        df = pd.read_json('./process/'+file_name)
        print('loading existing {}'.format(file_name))
    return df


def change_df_dtypes(df):
    df = df.fillna(0)
    df["followers_count"] = np.log2((df["followers_count"].astype("int64") + 1))
    df["friends_count"] = np.log2((df["friends_count"].astype("int64") + 1))
    df["listed_count"] = np.log2((df["listed_count"].astype("int64") + 1))
    # created_at: Unix timestamp in nanoseconds, converted to years since the epoch,
    # then min-max scaled together with the other numerical columns
    df["created_at"] = pd.to_numeric(pd.to_datetime(df["created_at"])) / 365 / 24 / 60 / 60 / 1000000000
    df["favourites_count"] = np.log2((df["favourites_count"].astype("int64") + 1))
    df["statuses_count"] = np.log2((df["statuses_count"].astype("int64") + 1))
    df['screen_name_length'] = ""
    for i, each in enumerate(df['screen_name']):
        df['screen_name_length'][i] = len(each)

    df['name_length'] = ""
    for i, each in enumerate(df['name']):
        df['name_length'][i] = len(each)
    del df['name']

    df['description_length'] = ""
    for i, each in enumerate(df['description']):
        df['description_length'][i] = len(each)
    del df['description']

    df['followers_friends_ratios'] = ""
    for i, each in enumerate(df['followers_count']):
        df['followers_friends_ratios'][i] = df['friends_count'][i] / (each + 1)

    # bool feature
    df["default_profile"] = df["default_profile"].astype("int8")
    df["default_profile_image"] = df["default_profile_image"].astype("int8")
    df["geo_enabled"] = df["geo_enabled"].astype("int8")
    df["profile_use_background_image"] = df["profile_use_background_image"].astype("int8")
    df["verified"] = df["verified"].astype("int8")

    df['is_default_profile_background_color'] = ''
    for i, each in enumerate(df['profile_background_color']):
        if each is not None:
            if each == 'F5F8FA':
                df['is_default_profile_background_color'][i] = 1
            elif each == '':
                df['is_default_profile_background_color'][i] = 1
            else:
                df['is_default_profile_background_color'][i] = 0
        else:
            df['is_default_profile_background_color'][i] = 1
    del df['profile_background_color']

    df['is_default_profile_sidebar_fill_color'] = ''
    for i, each in enumerate(df['profile_sidebar_fill_color']):
        if each is not None:
            if each == 'DDEEF6':
                df['is_default_profile_sidebar_fill_color'][i] = 1
            elif each == '':
                df['is_default_profile_sidebar_fill_color'][i] = 1
            else:
                df['is_default_profile_sidebar_fill_color'][i] = 0
        else:
            df['is_default_profile_sidebar_fill_color'][i] = 1
    del df['profile_sidebar_fill_color']

    df['is_default_profile_sidebar_border_color'] = ''
    for i, each in enumerate(df['profile_sidebar_border_color']):
        if each is not None:
            if each == 'C0DEED':
                df['is_default_profile_sidebar_border_color'][i] = 1
            elif each == '':
                df['is_default_profile_sidebar_border_color'][i] = 1
            else:
                df['is_default_profile_sidebar_border_color'][i] = 0
        else:
            df['is_default_profile_sidebar_border_color'][i] = 1
    del df['profile_sidebar_border_color']

    df['has_url'] = ''
    for i, each in enumerate(df['url']):
        if each is not None:
            if each != 0:
                df['has_url'][i] = 1
            else:
                df['has_url'][i] = 0
        else:
            df['has_url'][i] = 0
    del df['url']

    df['has_profile_background_image_url'] = ''
    for i, each in enumerate(df['profile_background_image_url']):
        if each is not None:
            if each != 0:
                df['has_profile_background_image_url'][i] = 1
            else:
                df['has_profile_background_image_url'][i] = 0
        else:
            df['has_profile_background_image_url'][i] = 0
    del df['profile_background_image_url']

    return df

def read_info_data(json_path):
    with open(json_path, "r") as f:
        data = json.loads(f.read())
    df = pd.json_normalize(data=data)

    return df
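One possible way to make the scaling reproducible for new accounts, sketched here only as a suggestion (the file name and layout are hypothetical, not part of the released code), would be to export the fitted scaler's per-feature minima and maxima:

# Sketch only: assumes `df` and `numerical_cols` from the snippet above.
import json

scaler = MinMaxScaler()
df[numerical_cols] = scaler.fit_transform(df[numerical_cols])

# Export per-feature minima/maxima; this discloses no user-level information.
min_max = {
    col: {"min": float(mn), "max": float(mx)}
    for col, mn, mx in zip(numerical_cols, scaler.data_min_, scaler.data_max_)
}
with open("./process/feature_min_max.json", "w") as f:
    json.dump(min_max, f, indent=2)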

@msharara1998 (Author) commented Feb 2, 2023

[Screenshot of the dataset's numerical features.]
As you may notice, all numerical features are between 0 and 1. To my knowledge, this is called normalization; a standardized distribution does not necessarily yield values between 0 and 1. After inspecting the code, it seems that you applied min-max scaling rather than z-score standardization as stated in your paper.

So, as I previously mentioned, we cannot use this dataset as long as we do not have the minimum and maximum of each feature's values in your dataset, so it would be helpful to share the unprocessed dataset.

I much appreciate your sharing of the code.

@msharara1998 (Author)

Or at least share the minimum and maximum of each feature in the dataset, for complete reproducibility. (A short sketch of how these two values would be used follows below.)
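For concreteness, a minimal sketch of what those two numbers per feature would enable (the values below are hypothetical placeholders, not taken from MGTAB):

import numpy as np

# Hypothetical published minima/maxima for three numerical features.
feature_min = np.array([0.0, 0.0, 0.0])
feature_max = np.array([24.0, 19.0, 21.0])

def minmax_scale(x_raw):
    # Scale a new, already feature-engineered account exactly as the dataset was scaled.
    return (x_raw - feature_min) / (feature_max - feature_min)

def minmax_invert(x_scaled):
    # Recover the unscaled feature values from the released, normalized dataset.
    return x_scaled * (feature_max - feature_min) + feature_min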

@msharara1998 (Author)

Sorry for being persistent, but I assume that when you publicly release a dataset, your aim is for people to benefit from it.
If the released dataset is normalized (using MinMaxScaler), no one can benefit from it unless they have the minimum and maximum of each feature.

Here is an example to clarify my point:
suppose you trained an XGBoost model on a dataset and used the following code:

X = df[features].to_numpy()
y = df[target_label].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
    
scaler = MinMaxScaler()
    
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model.fit(X_train, y_train)

Now that the model is trained, suppose we want to predict the label of a new data point x_1. This data point must be min-max scaled with the same scaler that was fitted on the training data; otherwise we will get wrong results, since the new point has to be scaled with the same values as the training data to be consistent:

x_1 = scaler.transform(x_1)
model.predict(x_1)
...
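In practice one would also persist the fitted scaler alongside the model so that it can be reapplied later; a minimal sketch using joblib, assuming scaler and model from the snippet above:

import joblib

# Persist the fitted scaler together with the trained model ...
joblib.dump(scaler, "minmax_scaler.joblib")
joblib.dump(model, "xgb_model.joblib")

# ... so both can be reloaded later and applied to new accounts consistently.
scaler = joblib.load("minmax_scaler.joblib")
model = joblib.load("xgb_model.joblib")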

I hope you take this into consideration; otherwise no one can benefit from the released dataset and part of your effort will have been in vain.

@GraphDetec (Owner)

MGTAB is a normalized heterogeneous graph data set with multiple relations, on which effective feature extraction has been carried out. As you say, the original features are not visible, since we hope that readers can use the processed data directly.

Part of the original data has been sent to your email; we hope it will be helpful for your research.

@msharara1998 (Author)

Thanks for sharing. But there is a win-win solution for both of us: please just share the minimum and the maximum of each numerical feature. That way, no user information is disclosed, and at the same time everyone can benefit properly and correctly from the dataset.
Thanks in advance!

@amr-galal

Hello,

In the Appendix of your paper, Section A.1, you mention that the min/max values of the features are made public in the repository, but I can't find them. Could you point me to them?

If you haven't published them, then I'd agree with @msharara1998 that no one can benefit from your great work on new/other data!

[Screenshot of the cited passage from the paper's Appendix A.1.]
