
Fastest way to perform complex search on pandas dataframe


I am trying to figure out the fastest way to perform search and sort on a pandas dataframe. Below are before and after dataframes of what I am trying to accomplish.



Before:



flightTo flightFrom toNum fromNum toCode fromCode
ABC DEF 123 456 8000 8000
DEF XYZ 456 893 9999 9999
AAA BBB 473 917 5555 5555
BBB CCC 917 341 5555 5555


After search/sort:



flightTo flightFrom toNum fromNum toCode fromCode
ABC XYZ 123 893 8000 9999
AAA CCC 473 341 5555 5555


In this example I am essentially trying to filter out 'flights' that exist in between the end destinations. This should be doable with some sort of drop-duplicates method, but what leaves me confused is how to handle all of the columns. Would a binary search be the best way to accomplish this? Hints appreciated; I'm trying hard to figure this out.



possible edge case:



What if the data is switched up and our end connections are in the same column?



flight1 flight2 1Num 2Num 1Code 2Code
ABC DEF 123 456 8000 8000
XYZ DEF 893 456 9999 9999


After search/sort:



flight1 flight2 1Num 2Num 1Code 2Code
ABC XYZ 123 893 8000 9999


This case logically shouldn't happen. After all, how can you go DEF-ABC and DEF-XYZ? You can't, but the 'endpoints' would still be ABC-XYZ.
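For reference, here is a minimal sketch that reconstructs the 'Before' frame and the edge-case frame shown above (the pd.DataFrame construction is ours, added for reproducibility; it is not part of the original post):

import pandas as pd

# 'Before' frame from the example above
df = pd.DataFrame({
    'flightTo':   ['ABC', 'DEF', 'AAA', 'BBB'],
    'flightFrom': ['DEF', 'XYZ', 'BBB', 'CCC'],
    'toNum':      [123, 456, 473, 917],
    'fromNum':    [456, 893, 917, 341],
    'toCode':     [8000, 9999, 5555, 5555],
    'fromCode':   [8000, 9999, 5555, 5555],
})

# edge-case frame where both end connections sit in the same column
df_edge = pd.DataFrame({
    'flight1': ['ABC', 'XYZ'],
    'flight2': ['DEF', 'DEF'],
    '1Num':    [123, 893],
    '2Num':    [456, 456],
    '1Code':   [8000, 9999],
    '2Code':   [8000, 9999],
})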










python pandas binary-search-tree

asked May 28 at 14:07 by MaxB, edited May 28 at 18:40
  • Are the connecting flights always adjacent in the data frame? – Mike, May 28 at 14:14

  • np.where(condition) – Dadu Khan, May 28 at 14:14

  • how about df['flightFrom'].shift() != df['flightTo']? – IanS, May 28 at 14:17

  • @Mike the information can be completely random in the DataFrame – MaxB, May 28 at 14:18

  • @IanS check the values in fromNum, fromCode in the expected output, that's what makes this question complex imo. – Erfan, May 28 at 14:26


















2 Answers
This is a network problem, so we use networkx. Notice that here you can have more than two stops, which means you can have a case like NY-DC-WA-NC.



import networkx as nx

# create the networkx graph object from the pandas dataframe
G = nx.from_pandas_edgelist(df, 'flightTo', 'flightFrom')

# get the list of connected components; the airports in one component
# are all tied to each other, i.e. linked in the network graph
l = list(nx.connected_components(G))

# from the above we can create our mapping dict: since the members of a
# component are all connected to each other, we just need to pick one
# label per component and map every member to it
L = [dict.fromkeys(y, x) for x, y in enumerate(l)]
d = {k: v for d in L for k, v in d.items()}

# create the dict for groupby: the *_to columns take 'first' and the *_from columns take 'last'
grouppd = dict(zip(df.columns.tolist(), ['first', 'last'] * 3))
df.groupby(df.flightTo.map(d)).agg(grouppd)  # agg with this dict yields your output

Out[22]:
flightTo flightFrom toNum fromNum toCode fromCode
flightTo
0 ABC XYZ 123 893 8000 9999
1 AAA CCC 473 341 5555 5555


Installing networkx:




  • Pip: pip install networkx


  • Anaconda: conda install -c anaconda networkx
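For readers who want the whole pipeline in one place, here is a compact sketch of the same idea wrapped in a function; the name collapse_connections is ours for illustration, and `df` is assumed to be the sample frame built in the snippet after the question body above. Note that the 'first'/'last' aggregation keeps values in their original row order, so it assumes the legs of each itinerary are stored in travel order.

import networkx as nx

def collapse_connections(df):
    # build an undirected graph whose edges are the individual flight legs
    G = nx.from_pandas_edgelist(df, 'flightTo', 'flightFrom')
    # map every airport to the id of its connected component (its itinerary)
    comp = {node: i for i, nodes in enumerate(nx.connected_components(G)) for node in nodes}
    # keep the first value of the *_to columns and the last value of the *_from columns
    agg = dict(zip(df.columns, ['first', 'last'] * 3))
    return df.groupby(df.flightTo.map(comp)).agg(agg)

print(collapse_connections(df))  # reproduces the Out[22] frame shown above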





answered May 28 at 14:19 by WeNYoBen, edited May 28 at 18:18
  • great answer! Looked into networkx a couple of times, will do more now! – Erfan, May 28 at 14:21

  • @Erfan love the enthusiasm ;) same here (for networkx) – anky_91, May 28 at 14:22

  • This answer deserves to be broken down in more explanation :) (so I can learn from it hehe) – Erfan, May 28 at 14:24

  • @Erfan ok let me work on it – WeNYoBen, May 28 at 14:24

  • Best answer I have read. Is it possible to edit the variables, using informative names instead of letters, and expand the solution? Or better, write a post/article on Medium (or another place) explaining this methodology. – Prayson W. Daniel, May 28 at 15:40


















Here's a NumPy solution, which might be convenient if performance is relevant:



import numpy as np

def remove_middle_dest(df):
    x = df.to_numpy()
    # obtain a flat numpy array from both columns
    b = x[:, 0:2].ravel()
    _, ix, inv = np.unique(b, return_index=True, return_inverse=True)
    # index of duplicate values in b
    ixs_drop = np.setdiff1d(np.arange(len(b)), ix)
    # indices to be used to replace the content in the columns
    replace_at = (inv[:, None] == inv[ixs_drop]).argmax(0)
    # column index of where the duplicate value is, 0 or 1
    col = (ixs_drop % 2) ^ 1
    # 2d array to index and replace values in the df
    # index to obtain values with which to replace
    keep_cols = np.broadcast_to([3, 5], (len(col), 2))
    ixs = np.concatenate([col[:, None], keep_cols], 1)
    # translate flat indices to row indices
    rows_drop, rows_replace = (ixs_drop // 2), (replace_at // 2)
    c = np.empty((len(col), 5), dtype=x.dtype)
    c[:, ::2] = x[rows_drop[:, None], ixs]
    c[:, 1::2] = x[rows_replace[:, None], [2, 4]]
    # update dataframe and drop rows
    df.iloc[rows_replace, 1:] = c
    return df.drop(rows_drop)



Which for the proposed dataframe yields the expected output:



print(df)
flightTo flightFrom toNum fromNum toCode fromCode
0 ABC DEF 123 456 8000 8000
1 DEF XYZ 456 893 9999 9999
2 AAA BBB 473 917 5555 5555
3 BBB CCC 917 341 5555 5555

remove_middle_dest(df)

flightTo flightFrom toNum fromNum toCode fromCode
0 ABC XYZ 123 893 8000 9999
2 AAA CCC 473 341 5555 5555


This approach does not assume any particular order for the rows where the duplicate is, and the same applies to the columns (this covers the edge case described in the question). If we use, for instance, the following dataframe:



 flightTo flightFrom toNum fromNum toCode fromCode
0 ABC DEF 123 456 8000 8000
1 XYZ DEF 893 456 9999 9999
2 AAA BBB 473 917 5555 5555
3 BBB CCC 917 341 5555 5555

remove_middle_dest(df)

flightTo flightFrom toNum fromNum toCode fromCode
0 ABC XYZ 123 456 8000 9999
2 AAA CCC 473 341 5555 5555
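One practical usage note, as a sketch assuming `df` is the sample frame built in the snippet after the question: the function writes into the frame it receives (via the df.iloc[rows_replace, 1:] = c assignment) before dropping rows, so pass a copy if the original frame should stay untouched.

result = remove_middle_dest(df.copy())  # work on a copy; the function modifies its argument in place
print(result)   # collapsed itineraries
print(df)       # the original frame is unchanged because a copy was passed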





answered May 28 at 14:32 by yatu, edited May 29 at 12:35
  • Would this generalize to the case where the flights are randomly distributed over the dataframe? – Erfan, May 28 at 14:38

  • I think the only problem is //2 – WeNYoBen, May 28 at 14:48










