ESMA Discussion Paper Response

February 15, 2016
 
 
 
 

Reply form for the Discussion Paper on Benchmarks Regulation

Responding to this paper
The European Securities and Markets Authority (ESMA) invites responses to the specific questions listed in the Discussion Paper on the Benchmarks Regulation, published on the ESMA website.

Instructions
Please note that, in order to facilitate the analysis of the large number of responses expected, you are requested to use this file to send your response to ESMA so as to allow us to process it properly. Therefore, ESMA will only be able to consider responses which follow the instructions described below:

    • use this form and send your responses in Word format (pdf documents will not be considered except for annexes);
    • do not remove the tags of type <ESMA_QUESTION_DP_BMR_1> – i.e. the response to one question has to be framed by the 2 tags corresponding to the question; and
    • if you do not have a response to a question, do not delete it and leave the text “TYPE YOUR TEXT HERE” between the tags.
    Responses are most helpful:
    • if they respond to the question stated;
    • if they contain a clear rationale, including on any related costs and benefits; and
    • if they describe any alternatives that ESMA should consider.

Naming protocol
In order to facilitate the handling of stakeholders’ responses, please save your document using the following format:
ESMA_DP_BMR_NAMEOFCOMPANY_NAMEOFDOCUMENT.
E.g. if the respondent were XXXX, the name of the reply form would be:
ESMA_DP_BMR_XXXX_REPLYFORM or
ESMA_DP_BMR_XXXX_ANNEX1
To help you navigate this document more easily, bookmarks are available in “Navigation Pane” for Word 2010 and in “Document Map” for Word 2007.

Deadline
Responses must reach us by 31 March 2016.
All contributions should be submitted online at www.esma.europa.eu under the heading ‘Your input/Consultations’.

Publication of responses
All contributions received will be published following the end of the consultation period, unless otherwise requested. Please clearly indicate by ticking the appropriate checkbox in the website submission form if you do not wish your contribution to be publicly disclosed. A standard confidentiality statement in an email message will not be treated as a request for non-disclosure. Note also that a confidential response may be requested from us in accordance with ESMA’s rules on access to documents. We may consult you if we receive such a request. Any decision we make is reviewable by ESMA’s Board of Appeal and the European Ombudsman.

Data protection
Information on data protection can be found at www.esma.europa.eu under the headings ‘Legal notice’ and ‘Data protection’.


Introduction
Please make your introductory comments below, if any:


Q1: Do you agree that an index’s characteristic of being “made available to the public” should be defined in an open manner, possibly reflecting the current channels and modalities of publication of existing benchmarks, in order not to unduly restrict the number of benchmarks in scope?

Yes, “made available to the public” should be defined in an open manner and as broadly as possible so as not to unduly restrict the number of benchmarks in scope. It would not be in the best interest of investor protection if administrators could stop “publishing” as a way to fall outside the BMR. IIA feels an appropriate definition would read as follows: “‘made available to the public’ means the provision of any benchmark to another entity for the purposes of issuing or creating a financial instrument or financial contract or benchmarking the performance of an investment fund”.

Q2: Do you have any proposals on which aspects of the publication process of an index should be considered in order for it to be deemed as having made the index available to the public, for the purpose of the BMR?

The mechanism of publication should not matter; what matters is that the index is made available. The publication process should focus on any provision of a benchmark to another entity for the purposes of issuing or creating a financial instrument or financial contract or benchmarking the performance of an investment fund. Focusing on any such provision or dissemination ensures that the scope of the Regulation is not unduly restricted in terms of the number of benchmarks covered.

Q3: Do you agree with ESMA’s proposal to align the administering the arrangements for determining a benchmark with the IOSCO principle on the overall responsibility of the administrator? Which other characteristics/activities would you regard as covered by Article 3(1) point 3(a)?

Yes, ESMA’s proposal to align with the IOSCO Principles on the overall responsibility of the administrator is an appropriate and sound approach. The administrator should retain overall responsibility for all aspects of the benchmark determination process, including “development, determination and dissemination, operation and governance” as stated in Paragraph 25. Further, essential to retaining overall responsibility is the concept that the administrator be responsible for “the development of the methodology and the establishment of governance arrangements”, so that the administrator is responsible for those requirements set forth in the Regulation that pertain to methodology quality, in particular the characteristics of a methodology set forth under Article 13. To be clear, administrators control to whom they license indices for use, but that does not mean they control the distribution of indices.

Q4: Do you agree with ESMA’s proposal for a definition of issuance of a financial instrument? Are there additional aspects that this definition should cover?

As set forth in Paragraph 30, we agree that the issuance of financial instruments should “more generally cover the act of creating a financial instrument which references an index or a composition of indices […] or of entering into reciprocal contracts with third parties”. Note that we believe “reference” covers both a product whose price is directly determined by the benchmark and a product meant to track the benchmark. The issuance of financial instruments should be interpreted broadly because “use of a benchmark” determines the scope of the Regulation, and we feel that the scope should be broad enough to include indices used for all types of investment products. The inclusion of the issuance of financial instruments should be broad in scope, although for certain instrument types it will be difficult, if not impossible, to obtain accurate data when applying the thresholds for the significant/non-significant categorizations.

Q5: Do you think that the business activities of market operators and CCPs in connection with possible creation of financial instruments for trading could fall under the specification of “issuance of a financial instrument which references an index or a combination of indices”? If not, which element of the “use of benchmark” definition could cover these business activities?

The creation of financial instruments by any entity should be deemed to fall under the concept of “issuance of a financial instrument”. The concept of issuer should not be the most important consideration. There can be any number of issuers for the same index, depending on the legal structure and regulatory authority of the issuer. All financial instruments have an issuer, and it should not matter to the BMR who that issuer is for the Regulation to apply.

Q6: Do you agree with the proposed list of appropriate governance arrangements for the oversight function? Would you propose any additional structure or changes to the proposed structures?

IIA agrees that the governance structure should be appropriate to the benchmark, its ownership and control structure and the nature, scale and complexity of the benchmark, as set out in Paragraph 43 of the DP. We also agree that the governance structure can be accomplished through a series of committees. IIA believes the governance structure and terms of reference should be made public, as IOSCO recommended. However, we believe that care needs to be taken when considering the introduction of external parties, such as stakeholders and INEDs, into benchmark governance structures. This could introduce very serious conflicts of interest into benchmark administration, jeopardize the independence of the index and conflict with securities disclosure laws. This is particularly true for widely used benchmarks. Stakeholders include those parties who have issued financial products off of the index and are exactly the parties that could benefit from particular index changes. Additionally, if those parties could get access to price-sensitive information (such as index changes or index performance) before the rest of the market, that would violate securities disclosure laws and would introduce the very conflicts of interest the BMR is trying to eliminate. Market participants with sufficient knowledge of the market the benchmark is designed to measure are precisely the stakeholders who have such conflicts. Further, the BMR did not require external parties to be included in the oversight function, and as such requiring it in the technical standards would go beyond the Level 1 text.

IIA is concerned that the oversight function is being conflated with the need for a sound benchmark governance regime. The oversight function should not be tasked with questioning analytical or editorial decisions made under a governance regime. We do not agree with the function of benchmark oversight set out in Paragraph 37 of the DP, namely ensuring that “there is an effective challenge to the board or equivalent management of the benchmark administrator.” Benchmark governance and management of a benchmark administrator’s business are distinct functions and should be handled differently.

The Regulation should instead focus on the existence of conflicts of interest, especially where “oversight committee members” have a vested interest in the level of the benchmark. Independent administrators do not have positions in the component securities, nor do they issue products on their benchmarks.

Q7: Do you believe these proposals sufficiently address the needs of all types of benchmarks and administrators? If not, what characteristics do such benchmarks have that would need to be addressed in the proposals?

IIA agrees that the governance structure should be appropriate to the benchmark, its ownership and control structure and the nature, scale and complexity of the benchmark. IIA also agrees that the governance structure can be accomplished through a series of committees. We believe the governance structure and terms of reference should be made public, as IOSCO recommended. However, IIA believes that care needs to be taken when considering the introduction of external parties, such as stakeholders and INEDs, into benchmark governance structures. It could introduce very serious conflicts of interest into benchmark administration, jeopardize the independence of the index and conflict with securities disclosure laws. This is particularly true for widely used benchmarks. Stakeholders include those parties who have issued financial products off of the index and are exactly the parties that could benefit from particular index changes. Additionally, if those parties could get access to price-sensitive information (such as index changes or index performance) before the rest of the market, that would violate securities disclosure laws. Further, the BMR did not require external parties to be included in the oversight function, and as such requiring it in the technical standards would go beyond the Level 1 text.

We do not agree that the function of benchmark oversight is to challenge the board or management of the benchmark administrator. Benchmark governance and the management of a business are distinct functions, and the BMR is not about regulating the management of benchmark administrators’ businesses.

Q8: To the extent that you provide benchmarks, do you have in place a pre-existing committee, introduced through other EU legislation, or otherwise, which could satisfy the requirements of an oversight function under Article 5a? Please describe the structure of the committee and the reasons for establishing it.

Many IIA members have had an oversight structure in place for many years for the purposes of escalation, consistency in applying their methodologies, and good governance. Per the recommendation of IOSCO, many (if not all) IIA members have published their governance structure and terms of reference on their websites.

Q9: Do you agree that an administrator could establish one oversight function for all the benchmarks it provides? Do you think it is appropriate for an administrator to have multiple oversight functions where it provides benchmarks that have different methodologies, users or seek to measure very different markets or economic realities?

An oversight function can include different sub-functions. Different methodologies may not trigger the need for different oversight. However, different asset classes may, because of the differences in the types of input data. Oversight should be adapted to the different needs of the benchmark. If there are common processes, then a common oversight function may be appropriate. If there are different processes, then different oversight functions may be appropriate. Oversight should also scale to the type of organization administering the benchmark. For example, it is not practical for an administrator calculating hundreds of thousands of indexes to have hundreds of committees. There needs to be a realistic, proportional approach.

Q10: If an administrator provides more than one critical benchmark, do you support the approach of one oversight function exercising oversight over all the critical benchmarks? Do you think it is necessary for an oversight function to have sub-functions, to account for the different needs of different types of benchmarks?

Oversight should be adapted to the different needs of the benchmark. Different asset classes may require different oversight because of the differences in the types of input data. If there are common processes, then a common oversight function may be appropriate. If there are different processes, then different oversight functions may be appropriate.

Q11: Where an administrator provides critical benchmarks and significant or non-significant benchmarks, do you think it should establish different oversight functions depending on the nature, scale and complexity of the critical benchmarks versus the significant or non-significant benchmarks?

The differences in oversight come from the differences in the benchmarks (including the asset class being measured), not from the amount of assets tracking the benchmarks. For example, how a real estate index is calculated using data points about commercial buildings is very different from how equity indexes are calculated using stock exchange and other data. Those differences relate to input data, calculation processes, publication timetables, etc. That is what requires different oversight, with expertise required in the various areas, not whether EUR 250 million or EUR 50 billion is tracking the index.

Q12: In which cases would you agree that contributors should be prevented from participating in oversight committees?

It is difficult to say in all circumstances. The Regulation should neither require nor prevent participation; this should depend on circumstances unique to the benchmark being measured, as long as any conflicts of interest are adequately managed.

Q13: Do you foresee additional costs to your business or, if you are not an administrator, to the business of others resulting from the establishment of multiple oversight functions in connection with the different businesses performed and/or the different nature, scale and type of benchmarks provided? Please describe the nature, and where possible provide estimates, of these costs.

This is difficult for our members to quantify without knowing the extent of the changes, but if administrators were required to materially change their governance structures, there could be associated costs, depending on what was required to change. For example, if a full oversight function had to be located entirely in a single specific location, there would be a cost of relocating people or hiring senior people with the requisite expertise in that location.

Q14: Do you agree that, in all cases, an oversight function should not be responsible for overseeing the business decisions of the management body?

No. The index oversight function is about oversight of the administration of the index, not the business as a whole. The latter is the responsibility of management and is governed by other rules about company board membership, management, etc. It is not appropriate to bring the management of a business into benchmark oversight functions. Further, IIA does not believe that the role of the oversight function is to “challenge” benchmark administration. Its purpose is to advise, provide an escalation point, ensure consistent application of policy, and identify and address policy risk.

Q15: Do you support the proposed positioning of the oversight function of an administrator? If not, please explain your reasons why this positioning may not be appropriate.

IIA does not believe that the analogy of board committees (such as remuneration) is accurate for index governance functions. Nor do we believe that the role of the oversight function is to “challenge” benchmark administration. Its purpose is to advise, provide an escalation point, ensure consistent application of policy, and identify and address policy risk. It is also not the purpose of the benchmark oversight function to seek to challenge the management of the business; its role is oversight of the benchmark administration. The BMR does not regulate the management of benchmark administrators. Independent benchmark administrators should not be forced to include INEDs in any oversight function as set forth in the third bullet of Paragraph 43 of the DP. This could introduce conflicts of interest that did not previously exist, because those with sufficient knowledge of the relevant market may be stakeholders with a vested interest in the level of the benchmark.

Q16: Do you have any additional comments with regard to the procedures for the oversight function as well as the composition and positioning of the oversight function within an administrator’s organisation?

The oversight function should be relevant to the benchmark and should not introduce unnatural requirements into benchmark administration. Mandating that parties that stand to benefit from benchmark changes (such as contributors, users and stakeholders) be included in governance committees could introduce conflicts of interest into the process that do not exist today and would be antithetical to the purpose of the BMR.

Q17: Do you agree with the proposed list of elements of procedures required for all oversight functions? Should different procedures be employed for different types of benchmarks?

Yes to both questions.

Q18: Do you agree with the proposed treatment of conflicts of interest arising from the composition of an oversight function? Have you identified any additional conflicts which ESMA should consider in drafting the RTS?

Yes to the first question. Stakeholders on committees can also introduce conflicts of interest.

Q19: Do you agree with the list of records to be kept by the administrator for input data verification? If not, please specify which information is superfluous / which additional information is needed and why.

The list of records required to be kept by administrators needs to be aligned with the type of input data used for a particular benchmark and the risks posed by that category of input data. The list of records may make sense for input data sourced from contributors, provided a contribution is appropriately defined. Independent administrators do not have access to the books and records of contributors and do not know whether data has been excluded. Input data sourced from data vendors which “is readily available to an administrator” should not be considered a contribution, in line with Paragraph 139. Our view, as further expressed in our response to question 45, is that “readily available” means “input data not created for the sole purpose of calculating a benchmark.” This is a view expressed by various members of ESMA. In this context, readily available data should be treated similarly to regulated data, since the benchmark administrator does not have access to the details of the price formation. As discussed in Paragraph 68 of the consultation paper, administrators do not have to keep records of input data for regulated data benchmarks, but verifiability “in this case must be understood as checking the provenance and transmission of the input data used”, which can be done through appropriate quality assurance practices. IIA also believes ESMA will need to clarify exactly which data the requirements would apply to. Benchmark administrators should not have to duplicate “readily available” data that already exists for purposes other than the benchmark. Supervised entities may already be required to retain such data under MiFID and other regulation, making additional retention duplicative.

Q20: Do you agree that, for the information to be transmitted to the administrator in view of ensuring the verifiability of input data, weekly transmission is sufficient? Would you instead consider it appropriate to leave the frequency of transmission to be defined by the administrator (i.e. in the code of conduct)?

Although weekly sounds like a reasonable transmission frequency for certain benchmarks for verifiability purposes, the frequency of information transmission should be based on how frequently data are available and the benchmark is determined. Some benchmarks are based on monthly or quarterly data, so weekly transmission as a rule for all benchmarks may not make sense. Obviously, the timing should be a function of the natural frequency of data submission and benchmark publication.

Q21: Do you agree with the concept of appropriateness as elaborated in this section?

The determination of the appropriateness of input data is an important component of a benchmark, and the administrator should seek to ensure that each of its benchmarks utilizes input data that yield a meaningful and representative result. As set forth in Paragraph 74, appropriateness of input data should be “linked to the methodology of a benchmark” and “is up to the administrator, when establishing the methodology”. The appropriateness of input data needs to be determined at the discretion of the administrator and then disclosed in the methodology document. Once disclosed, market participants can assess whether the benchmark is appropriate for their own uses and objectives. With respect to the various checks to assist an administrator in determining appropriateness, many concepts included in Paragraph 75 are helpful only for specific types of input data (e.g. some apply only to contributions, while others apply only to regulated data). The administrator is the entity responsible for the quality of the methodology, so it needs to determine which concepts are suitable to determine the appropriateness of the input data it uses.

Q22: Do you see any other checks an administrator could use to verify the appropriateness of input data?

No.

Q23: Would you consider it useful that the administrator maintains records of the analyses performed to evaluate the appropriateness of input data?

Every administrator should disclose, in its methodology description, the input data used in each of its benchmarks and the rationale for choosing that input data. The use of input data is driven by the asset class of the benchmark, and the methodology should explain the use of the data. In the event an administrator is questioned on the appropriateness of the input data used for a particular benchmark, it should be prepared to support its decision, but the records an administrator retains in connection with its evaluation of input data should be at the administrator’s discretion, in accordance with its own internal record retention policy.

Q24: Do you see other possible measures to ensure verifiability of input data?

No.

Q25: Do you agree with the identification of the concepts and underpinning activities of evaluation, validation and verifiability, as used in this section?

Evaluation, validation and verifiability are reasonable designations for the types of testing/underpinning activities for input data, but it is important that these concepts are applied in a proportional manner and only required if needed for the particular benchmark. It is important to understand that not all three concepts apply to all benchmark types.

In particular, verification should not be required for input data that is readily available. As set forth above in our response to question 19 and discussed in Paragraph 68 of the consultation paper, verifiability of input data for regulated data benchmarks “must be understood as checking the provenance and transmission of the input data used”, which can be done through appropriate quality assurance practices. We believe that this same concept should apply to input data that is readily available. “Readily available” should be defined as “input data not created for the sole purpose of calculating a benchmark.” With respect to input data that are not a contribution, we feel that validation as described in Paragraphs 79 and 82 is the pertinent activity for quality assurance. Other thoughts:

    (a) Evaluation: With the exception of contributed data (which does not include regulated data and “readily available” data), the administrator does not have insight at the price source level. Evaluation would be almost impossible with respect to real-time indices.
    (b) Validation: As mentioned, acceptable for most input data as our interpretation is that the primary means of validation are appropriate testing and quality assurance practices.
    (c) Verification: This is difficult to undertake for most input data and irrelevant for regulated data benchmarks and benchmarks that use “readily available” data (as discussed above).

Q26: Do you agree that all staff involved in input data submission should undergo training, but that such training should be more elaborate / should be repeated more frequently where it concerns front office staff contributing to benchmarks?

As an independent index provider, we do not deal with the potential conflicts of interest that arise from a front office contributor scenario. However, where a situation or structure presents an increased likelihood of conflicts of interest, such as the front office contributor structure, more in-depth and frequent training can assist in mitigating any such potential conflicts.

Q27: Do you agree to the three lines of defence-principle as an ideal type of internal oversight architecture?

The components of the “three lines of defence” principle comprise an appropriate oversight function and, although the functions, policies and procedures may vary, these components are likely to be present in any internal oversight function. It is important that the administrator or contributor has the discretion to organize the oversight function as it deems appropriate to the circumstances, and that the relevant competent authority has the ability to review and comment if it deems necessary.

Q28: Do you identify other elements that could improve oversight at contributor level?

TYPE YOUR TEXT HERE

Q29: Do you agree with the list of elements contained in a conflict of interest policy? If not, please state which elements should be added / which elements you consider superfluous and why.

The list of elements is appropriate, but some items may need to be qualified. For example, automation of contributions is a sound approach to mitigating conflicts of interest, but it may not be appropriate for every type of data and should not be required “wherever possible”. It may be possible, but not appropriate, to automate such a process.

Q30: Do you agree that where expert judgement is relied on and/or discretion is used additional appropriate measures to ensure verifiability of input data should be imposed? If not, please specify examples and reasons why you disagree.

As noted in our reply to Q25, the need to implement the verification process as described in Paragraph 82, i.e. “recreate a given input data point based on records kept by the administrator of data supplied by contributor”, should be limited to contributions. Any regulated data (as noted in Paragraph 104 and elsewhere) as well as any other “readily available” input data (i.e. “input data not created for the sole purpose of calculating a benchmark”) should not be subject to a verification process. Validation should be appropriate to protect the integrity of such input data. That said, when expert judgment and/or discretion is relied on to produce any input data, appropriate measures should be in place so that it is exercised appropriately and in a way that mitigates any potential conflicts of interest. With respect to contributions that may involve the exercise of expert judgment or discretion, such measures should be appropriate to ensure the verifiability of the contributions.

Q31: Do you agree to the list of criteria that can justify differentiation? If not, please specify why you disagree.

The list of criteria to justify differentiation appears appropriate, with the exception of the “size of contributors” referenced in Paragraphs 108 and 109. Whether a contributor is large, small or somewhere in between, the integrity of the contribution and benchmark determination processes needs to be protected, and appropriate oversight measures should be implemented by any entity involved in the process. Any differentiation in assessing the appropriateness of the oversight function for a contributor should depend on the type of input data it provides and any potential conflicts of interest posed. The number of contributors and the proportionality of each contribution to the benchmark should also be taken into account: if 5 of 100 contributors are not participating, that entails a different risk level than if 5 of 7 contributors are not participating.

Q32: Do you agree to the list of elements that are amenable to proportional implementation? If not, please specify why you disagree.

Generally, the list of elements amenable to proportional implementation is appropriate, but all entities should have an appropriate degree of training as well as a conflict of interest policy that effectively addresses the particular conflicts of interest that arise from each entity’s circumstances.

Q33: Do you agree to the list of elements that are not amenable to proportional implementation? If not, please specify why you disagree.

Yes, the list of elements that are not amenable to proportional implementation is appropriate.

Q34: Do you consider the proposed list of key elements sufficiently granular “to allow users to understand how a benchmark is provided and to assess its representativeness, its relevance to particular users and its appropriateness as a reference for financial instruments and contracts”?

IIA believes in the transparency of the methodology of benchmarks where it is appropriate, as Recital 24 states. Transparency should be as broad as possible without compromising the intellectual property of the benchmark owner and while ensuring that such transparency does not expose the benchmark to the risk of front-running. This was acknowledged during the ESMA-EBA principles consultation process: “ESMA-EBA was indeed of the opinion that transparency may be limited only in exceptional circumstances based on legal provisions safeguarding confidentiality and intellectual property rights. In addition, when a benchmark’s methodology is fully disclosed, and input data is publicly available, there could be a risk of front-running” (page 40, paragraph 118). IIA believes that “where applicable” should apply to the third and fourth bullet points in Paragraph 122 of the Discussion Paper.

Q35: Beyond the list of key elements, could you identify other elements of benchmark methodology that should be disclosed? If yes, please explain the reason why these elements should be disclosed.

The list of elements identified in Paragraph 122 towards ensuring transparency of benchmark methodology is sufficient to achieve the policy objectives of ESMA.

Q36: Do you agree that the proposed key elements must be disclosed to the public (linked to Article 3, para 1, subpara 1, point (a))? If not, please specify why not.

IIA believes that the proposed elements should be made available to the public (as long as doing so does not violate intellectual property rights); they are usually posted on the administrator’s website.

Q37: Do you agree with ESMA’s proposal about the information to be made public concerning the internal review of the methodology? Please suggest any other information you consider useful to disclose on the topic.

IIA agrees that information proposed by ESMA in paragraph 127 should be made available to the public as long as it does not violate the administrator’s intellectual property rights.

Q38: Do you agree with the above proposals to specify the information to be provided to benchmark users and, more in general, stakeholders regarding material changes in benchmark methodology?

Yes, IIA agrees with the above proposals to specify the information to be provided to benchmark users and, more in general, stakeholders regarding material changes in benchmark methodology.

Q39: Do you agree, in particular, on the opportunity that also the replies received in response to the consultation are made available to the public, where allowed by respondents?

Yes, IIA believes that only the replies relating to critical benchmarks received in response to the consultation should be made available to the public, where allowed by respondents. IIA feels going beyond critical benchmarks would be disproportionate.

Q40: Do you agree that the publication requirements for key elements of methodology apply regardless of benchmark type? If not, please state which type of benchmark would be exempt / which elements of methodology would be exempt and why.

Yes, IIA agrees that the publication requirements for key elements of methodology apply regardless of benchmark type, in line with Recital 24. Transparency is essential to benchmarks, but methodologies should include only the elements applicable to that benchmark. In Paragraph 122 of the DP, “where applicable” should also apply to the third and fourth bullet points.

Q41: Do you agree that the publication requirements for the internal review of methodology apply regardless of benchmark type? If not, please state which information regarding the internal review could be differentiated and according to which characteristic of the benchmark or of its input data or of its methodology.

Yes, IIA agrees that the publication requirements for the internal review of methodology apply regardless of benchmark type.

Q42: Do you agree that, in the requirements regarding the procedure for material change, the proportionality built into the Level 1 text covers all needs for proportional application?

Yes, IIA agrees that, in the requirements regarding the procedure for material change, the proportionality built into the Level 1 text covers all needs for proportional application.

Q43: Do you agree that a benchmark administrator could have a standard code for all types of benchmarks? If not, should there be separate codes depending on whether a benchmark is critical, significant or non-significant? Please take into account your answer to this question when responding to all subsequent questions.

The main driver of the content of the code of conduct is the type of input data used (e.g., the asset class), not the number of assets tracking the index. The input data for the different asset classes have different data scrubbing processes, different publication schedules and different types of contributors, all of which will impact the code of conduct. Common processes and procedures may apply across the benchmarks within an asset class.

Codes of conduct should not be required for those benchmarks based on regulated data, regardless of what other data points are used in the index calculation and this point should be made clear to avoid confusion.

Codes of conduct for benchmarks based on submissions by contributors should focus on data quality and should not require the administrator to dictate or change the contributor’s corporate policies. The administrator cannot control this and has no authority legally or commercially to require these changes, especially if the contributors are not supervised entities, are voluntary contributors, and/or are outside the EU.

Q44: Do you believe that an administrator should be mandated to tailor a code of conduct, depending on the market or economic reality it seeks to measure and/or the methodology applied for the determination of the benchmark? Please explain your answer using examples of different categories or sectors of benchmarks, where applicable.

No. Benchmarks measuring performance in the same asset class can use the same types of data across different regions, sectors, etc. For example, you can have country real estate benchmarks or a regional real estate benchmark using the same data points and the same quality checks. Also, different methodologies can use the same data points. What drives different codes of conduct is the different asset class measured by the benchmark and the different input data, e.g., equities, fixed income, etc.

Q45: Do you agree with the above requirements for a contributor’s contribution process? Is there anything else that should be included?

No. Codes of conduct for benchmarks based on submissions by contributors should focus on data quality and should not require the administrator to dictate or change the contributor’s corporate policies. The administrator cannot control this and has no authority legally or commercially to require these changes, especially if the contributors are not supervised entities, are voluntary contributors, and/or are outside the EU.

Q46: Do you agree that the details of the code of conduct to be specified by ESMA may still allow administrators to tailor the details of their codes of conduct with respect to the specific benchmarks provided?

Yes, if relevant to that specific benchmark.

Q47: Do you agree that such information should be required from contributors under the code of conduct? Should any additional information be requested?

No. For input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation, the job description, role, experience and competence of the individual submitters are irrelevant, specifically because they are not determining the data. They are merely collecting readily available data and passing it onwards. Further, the administrator is not in a position to demand who can or cannot submit that data, in particular in the context of voluntary submissions, and is in absolutely no position to dictate the contributor’s compensation structure, employee performance reviews, etc.

Q48: Are there ways in which contributors may manage possible conflicts of interest at the level of the submitters? Should those conflicts, where managed, be disclosed to the administrator?

Contributors should disclose any conflicts of interests so they can be evaluated. If the administrator and contributor are separate, independent companies, the administrator has no legal authority or commercial ability to require the contributors to do so.

Q49: Do you foresee any obstacles to the administrator’s ability to evaluate the authorisation of any submitters to contribute input data on behalf of a contributor?

Fundamentally, where contributors and administrators are separate, independent companies, the administrator has no legal authority or commercial ability to require the contributors to change their corporate policies, compensation structures, IT policies, etc. This is especially true where contributions are voluntary and where the contributors are not supervised entities in the EU. Further, if there is no issue with data quality but the contributor fails to have one policy in place, is it really advisable to demand that the data not be used? Is it the goal of the BMR to shut down an index in an opaque market because of contributors’ corporate policies rather than their data submissions?

Q50: Do you agree that a contributor’s contribution process should foresee clear rules for the exclusion of data sources? Should any other information be supplied to administrators to allow them to ensure contributors have provided all relevant input data?

This does not make sense for all benchmarks. The methodology should outline what data points are used in, and needed for, the benchmark calculation.

Q51: Do you think that the listed procedures for submitting input data are comprehensive? If not, what is missing?

The listed procedures do not take into account the difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation.

Q52: Do you agree that rules are necessary to provide consistency of contributors’ behaviour over the time? Should this be set out in the code of conduct or in the benchmark methodology, or both?

No. They are excessive for input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation.

Documents should not include duplicative information. There is potential for confusion if multiple documents have to include the same information, which may be expressed in slightly different ways. Multiple documents can get out of sync, and a change to the benchmark or to a process can trigger changes in multiple documents, which can create a huge amount of extra work depending on the number of documents involved.

Q53: Should policies, in addition to those set out in the methodology, be in place at the level of the contributors, regarding the use of discretion in providing input data?

There is a very important difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation. While policies may be advisable for the data to be used in the calculation of the benchmark, they do not make sense for other types of data.

Q54: Do you agree with the list of checks for validation purposes? What other methods could be included?

There is a very important difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation. While the listed checks may be advisable for data used in the calculation of the benchmark, they do not make sense for other types of data. Validation is the primary activity, which we agree consists of comparison against prior data points and looking for outliers.

Q55: Do you agree with the minimum information requirement for record keeping? If not would you propose additional/alternative information?

Record keeping should not go beyond what is required in the Level 1 text under Article 9. What is important is what data was submitted and used, what was not, and why.

Q56: Do you support the recording of the use of expert judgement and of discretion? Should administrators require the same records for all types of benchmarks?

Yes, to the extent that expert judgment is defined in accordance with the IOSCO Principles for Financial Benchmarks. There is a very important difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation.

Q57: Do you agree that an administrator could require contributors to have in place a documented escalation process to report suspicious transactions?

No. Where contributors and administrators are separate, independent companies, the administrator has no legal authority or commercial ability to require the contributors to change their corporate policies, compensation structures, IT policies, etc. This is especially true where contributions are voluntary and where the contributors are not supervised entities in the EU.

Further, there is a very important difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation.

Q58: Do you agree with the list of policies, procedures and controls that would allow the identification and management of conflicts of interest? Should other requirements be included?

This will not apply to all benchmarks. There is a very important difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation.

Q59: Do you have any additional comments with regard to the contents of a code of conduct in accordance with Article 9(2)?

There is a very important difference between input data that is created solely for the purpose of calculating the benchmark and input data that is not created solely for the benchmark and instead exists outside of and independent of benchmark calculation. The codes of conduct should be based on the quality of the data, not on procedures and policies of the contributors that the administrator cannot force or enforce. Requiring anything more risks disrupting a benchmark where there are no data quality issues.

Q60: Do you agree with the above list of requirements? Do you think that those requirements are appropriate for all benchmarks? If not what do you think should be the criteria we should use?

IIA agrees with the list of requirements set out by ESMA for supervised contributors. However, full application of these principles to all types of benchmarks and all data would be too burdensome for the contributors and is likely to lead to a reduction in submissions. We encourage ESMA to consider proportionate application of these principles, aligning them as closely as possible with the code of conduct requirements for submitters. IIA members would like clarification on this point.

Q61: Do you agree that information regarding breaches to the BMR or to Code of Conduct should also be made available to the Benchmark Administrator?

Breaches of the BMR or the code of conduct should be made available to the Benchmark Administrator (BMA), since this will be essential for the BMA to discharge its duties effectively. Specifically, breaches of the BMR or code of conduct that result in a material change in final submissions should be reported to the administrator as soon as the contributor becomes aware of them. This is because spurious input data could affect the quality of input data received by the BMA, which would in turn have a material impact on the final index or benchmark produced. However, if breaches do not result in a material change in submissions, the contributors can inform the administrator after the benchmark is published.

Q62: Do you think that the external audit covering benchmark activities, where available, should also be made available, on request, to the Benchmark Administrator?

IIA believes that relevant information from the external audit covering benchmark activities should be made available to the BMA upon request.

Q63: Do you agree with the proposed criteria for the specific elements of systems and controls as listed in Article 11(2)(a) to (c)? If not, what should be alternative criteria to substantiate these elements?

IIA agrees with the proposed criteria for the specific elements of systems and controls for supervised contributors. We believe this will ensure that good-quality input data is submitted to BMAs. These controls would also help address the conflict of interest issues that arise from contributions made by the front office.

Q64: Do you agree that the submitters should not be remunerated for the level of their contribution but could be remunerated for the quality of input and their ability to manage the conflicts of interest instead?

Administrators should have flexibility to establish mechanisms to incentivise submitters in relation to the desired depth, breadth and quality of their submissions. IIA agrees that the remuneration structure should not incentivise submitters to manipulate the data or maximise contributions without improving quality. However, the meaning of terms such as “level of contribution” and “cost coverage” in the Discussion Paper (paragraph 181) is unclear.

Q65: What would be a reasonable delay for signing-off on the contribution? What are the reasons that would justify a delay in the sign off?

A number of submissions are made automatically by the contributing firm, and ESMA recognizes as much when it states in paragraph 182 that “it may not always be effective or proportionate to have a sign-off of the contribution before the data is provided to the administrator”. In these cases, we believe that the rules should allow for a post-publication sign-off.

Q66: Is the mentioned delay an element that may be established by the administrator in line with the applicable methodology and in consideration of the underlying, of the type of input data and of supervised contributors?

In paragraph 182, ESMA has rightly recognized that “it may not always be effective or proportionate to have a sign-off of the contribution before the data is provided to the administrator”. The delay in sign-off could be established by the benchmark administrator depending on the type of input data and the type of underlying. For example, when contributing transaction data, an ex-ante sign-off of contributions may not be necessary and would only delay the process of benchmark publication.

Q67: In case of a contribution made through an automated process what should be the adequate level of seniority for signing-off?

In the case of a contribution made through an automated process, the contribution process should be signed off by a person or body who can provide effective challenge, either periodically (e.g. every month or every quarter) or in the event that a contribution was in breach of the code of conduct.

Q68: Do you agree with the above policies? Are there any other policies that should be in place at contributor’s level when expert judgement is used?

IIA agrees that the above policies are sufficient to ensure submission of high quality input data and to also allow benchmark administrators to conduct ex-post verification of input data.

Q69: Do you agree with this approach? If so, what do you think are the main distinctions – amid the identified detailed measures that a supervised contributor will be required to put in place – that it is possible to introduce to cater for the different types, characteristics of benchmarks and of supervised contributors?

IIA believes that a stricter rules approach is reasonable for “those entities [...] of the regulation that may be authorised to deal in financial instruments” (Paragraph 188). There is an inherent conflict of interest for supervised contributors that take positions in financial instruments, especially if the position references the same benchmark they are contributing to. However, also in Paragraph 188, ESMA rightly established categories of entities to which “less strict rules could apply”, namely entities that do not “take a position on financial instruments as part of their core business” or “those for which this (position taking) could only occur occasionally”. We believe ESMA should also recognize another category of contributors which are also BMAs but which do not take positions in financial instruments. These entities should be subject to less strict rules than the entities described above.

Q70: Do you foresee additional costs to your business or, if you are not a supervised contributor, to the business of others resulting from the implementation of any of the listed requirements? Please describe the nature, and where possible provide estimates, of these costs.

IIA’s members are in a better position to provide estimates based on their circumstances.

Q71: Could the approach proposed, i.e. the use of the field total issued nominal amount in the context of MiFIR / MAR reference data, be used for the assessment of the “nominal amount” under BMR Article 13(1)(i) for bonds, other forms of securitised debt and money-market instruments? If not, please suggest alternative approaches

Note that this response applies both to critical benchmarks and to the determination of thresholds for significant benchmarks. It is inherently inequitable to require benchmark administrators to categorize their benchmarks using data to which they have limited, inconsistent and, at times, non-existent access. Administrators utilize a variety of licensing and fee arrangements, many of which either do not require reporting of underlying assets/notional amounts or produce inconsistent or limited insight into the amount/size of the underlying assets/notional amounts utilizing their benchmarks. A system should be devised to capture as many assets as practical. If the system requires administrators to produce, obtain or collect such data, it is necessary to include two caveats: first, a good faith standard should be set so that administrators are not penalized where the prescribed process was applied but the results were unintentionally erroneous; and, second, absent a mandate by the competent authorities or ESMA, the license agreements of benchmark administrators should not be required to include a demand by the licensor for these data, which are protected as proprietary and confidential information by many financial institutions. As such, we propose a system which varies by instrument type. The net asset value for each security should be used for calculation purposes to avoid over-counting.

    1. Category 1 – Funds in the EU – For UCITS funds and ETFs, we recommend requiring administrators to use a source from a list of ESMA-approved providers (e.g. Morningstar) to obtain balances as of a certain date. In line with the first caveat above, administrators should not be held responsible for any errors or omissions in calculating the threshold provided they use data from such third party(ies).
    2. Category 2 – Structured Products listed/traded in the EU – In order to get a consistent count, and given that these firms are all supervised entities in the EU, our recommendation is that the NCA or ESMA make an annual request to all benchmark administrators for a list of the firms to which a structured products license has been granted. The relevant authority would then send a mandatory survey to obtain the relevant assets, which would be reported back to the benchmark administrators for inclusion. Funded amounts, not leveraged or notional amounts, would be included.
    3. Category 3 – Exchange Traded Futures and Options in the EU – Netted value exposure on the relevant indices should be used, calculated annually through an ESMA or NCA survey as described above.
    4. Category 4 – Organized Trading Facility Derivatives in the EU – Only the net exposure of in-force trades as of a particular date would count, to avoid over-counting the exposures. These data should be sourced from ESMA or the NCAs, depending on which holds the data currently being reported.
    5. Category 5 – OTC Swaps, options, forwards and other non-structured product derivatives in the EU – These instruments are excluded, with the exception of securities from systematic internalizers. Since the definition of systematic internalizer under Directive 2014/65/EU is vague, the counting of OTC derivatives should be suspended until such time as tangible standards can be established for systematic internalizers.
    6. Category 6 – Performance Benchmarking – We feel that assets benchmarked do not contribute to the risk of a benchmark in the same way as investment products actually tracking the benchmark do, and therefore advocate excluding this category from the calculation.
    7. Category 7 – Asset Allocation/Fee Determination – We feel that these assets do not contribute to the risk of a benchmark in the same way as investment funds actually tracking the benchmark do, and therefore advocate excluding this category from the calculation.

Additionally, there is no way for the administrators to know precisely which firms are utilizing their indices in this manner, let alone how many assets are involved. For instance, for a combined benchmark, the asset allocation of a portfolio may involve dozens of indices – how does one allocate assets to each one? What if it is a model portfolio – how does one count the assets in that scenario?

Q72: Are you aware of any shares in companies, other securities equivalent to shares in companies, partnerships or other entities, depositary receipts in respect of shares, emission allowances for which a benchmark is used as a reference?

IIA is not aware of any such scenarios.

Q73: Do you have any suggestion for defining the assessment of the nominal amount of these financial instruments when they refer to a benchmark?

Please refer to our answer to Q71.

Q74: Do you agree with ESMA proposal in relation to the value of units in collective investment undertakings? If not, please explain why

The net asset value should be used in relation to the value of units in collective investment undertakings to avoid over-counting.

Q75: Do you agree with the approach of using the notional amount, as used and defined in the EMIR reporting regime, for the assessment of notional amount of derivatives under BMR Article 13(1)(i)? If not, please suggest alternative approaches.

IIA feels that any reporting of assets should be based on agreed, publicly available data; the sources of these data should be either NCAs, other public data sources or independent third parties that collect and publish the data. Please see our answer to Q71 for how to ensure the assets are not over-counted.

Q76: Which are your views on the two options proposed to assess the net asset value of investment funds? Should you have a preference for an alternative option, please provide details and explain the reasons for your preference.

IIA feels that the second option, to use the most recently available net asset value, is the more reasonable one to provide accuracy and completeness; gaps between the reporting periods for AIFMD and UCITS could distort the value at the point of assessment.

Q77: Which are your views on the two approaches proposed to assess the nominal amount of financial instruments other than derivatives, the notional amount of derivatives and the net asset value of an investment fund referencing a benchmark within a combination of benchmarks? Please provide details and explain the reasons for your preference. Do you think there are other possible approaches? If yes, please explain.

TYPE YOUR TEXT HERE

Q78: Do you agree with the ‘relative impact’ approach, i.e. define one or more value and “ratios” for each of the five areas (markets integrity; or financial stability; or consumers; or the real economy; or the financing of households and corporations) that need to be assessed according to Article 13(1)(c), subparagraph (iii)? If not, please elaborate on other options that you consider more suitable.

The “relative impact” approach may work for local-market IBORs where only national banks are allowed to participate, because only the local mortgage holders are impacted. For markets within a country that are open to global investors, a “relative impact” approach becomes very subjective and may have a negative impact on capital flows to those markets.

Q79: What kind of other objective grounds could be used to assess the potential impact of the discontinuity or unreliability of the benchmark besides the ones mentioned above (e.g. GDP, consumer credit agreement etc.)?

TYPE YOUR TEXT HERE

Q80: Do you agree with ESMA’s approach to further define the above criteria? Particularly, do you think that ESMA should develop more concrete guidance for the possible rejection of the NCA under Article 14c para 2? Do you believe that NCAs should take into consideration additional elements in their assessment?

IIA agrees with the approach to clarify and define the criteria. ESMA should develop clear guidance in relation to acceptance and rejection under Article 14c(2).

Q81: Do you think that the fields identified for the template are sufficient for the competent authority and the stakeholders to form an opinion on the representativeness, reliability and integrity of a benchmark, notwithstanding the non-application of some material requirements? Could you suggest additional fields?

IIA feels ESMA should allow for discretion in order to enable administrators to use as much as possible from their IOSCO compliance statements when making a statement for the BMR.

According to Article 14c (6) and 14d (2), administrators of significant and non-significant benchmarks respectively may decide not to comply with a number of governance requirements following a “comply or explain approach”, i.e. the administrators need to explain why non-compliance is appropriate in a compliance statement, for which ESMA is empowered to develop a template.

It should be clear how the respective administrator should complete the statements and how it can provide enough information to the NCA if it has chosen to “explain” rather than “comply”; it should also be clear what additional information the NCA can request to ensure compliance with the BMR. ESMA should also consider allowing a single statement, rather than numerous statements, for benchmarks that have substantially the same make-up even if they are not part of the same “family”.

Q82: Do you agree with the suggested minimum aspects for defining the market or economic reality measured by the benchmark?

The principle that benchmarks should define a market or economic reality is fair and the description of that benchmark’s target market or economic reality should be made readily and publicly available.

The estimation of market size (transaction volume or other trading-related metrics) should not be a requirement of the benchmark administrator. The benchmark should produce a value that accurately reflects the reality of the market or economic segment regardless of trading characteristics.

It is difficult for an independent index administrator to take into consideration observable market liquidity and/or the size and number of participants within a certain market segment while constructing a benchmark to reflect that market or economic reality in all circumstances. This will vary considerably based on the asset class and how the component securities trade. Those considerations should be implemented through the benchmark methodology documentation and made easily available. Guidelines cannot easily be established to govern each and every market segment or to create a reasonableness test. IIA believes there may be difficulties in measuring this for all benchmarks. IIA would also encourage avoiding duplicate reporting requirements and feels administrators should produce benchmark statements per family rather than per individual benchmark, given the large number of benchmarks produced.

Q83: Do you think the circumstances under which a benchmark determination may become unreliable can be sufficiently described by the suggested aspects?

These factors may not apply to all benchmarks. It is difficult, if not impossible, to mandate the scenarios up front. The administrator should describe the scenarios based on what the benchmark is intended to measure, as set out in the methodology.

Q84: Do you agree with the minimum information on the exercise of discretion to be included in the benchmark statement?

Where discretion can be applied to a benchmark, this should be made clear through the benchmark methodology documentation. The benchmark should continue to follow the IOSCO Principles in the determination and governance of applicable rules, which can account for areas of discretion.

Q85: Are there any further precise minimum contents for a benchmark statement that should apply to each benchmark beyond those stated in Art. 15(2) points (a) to (g) BMR?

It is important to provide information sufficient for market participants to understand the benchmark, its objectives and methodology, and the risks associated with them. Market participants and their financial advisors are the appropriate people to determine if a benchmark is suitable for use. Independent benchmark administrators do not create the products or get paid to provide financial advice, much less know what the ultimate investors’ financial needs are. The benchmark rationale should not be in the benchmark statement, but in the benchmark methodology. By describing the benchmark and its objective, the rationale is included in the benchmark methodology.

Q86: Do you agree that a concise description of the additional requirements including references, if any, would be sufficient for the information purposes of the benchmark statement for interest rate benchmarks?

Yes, the concise description of additional requirements on interest rate benchmarks is sufficient.

Q87: Do you agree that the statement for commodity benchmarks should be delimited as described? Otherwise, what other information would be essential in your opinion?

Commodity benchmarks, when comprised of publicly listed contracts (e.g. physical or cash-settled futures listed on an exchange), should not be treated as non-regulated data commodity benchmarks, and thus we agree with the delineation between regulated data benchmarks and non-regulated data commodity benchmarks.

Q88: Do you agree with ESMA’s approach not to include further material requirements for the content of benchmark statements regarding regulated-data benchmarks?

Yes.

Q89: Do you agree with the suggested additional content required for statements regarding critical benchmarks? If not, please precise why and indicate what alternative or additional information you consider appropriate in case a benchmark qualifies as critical.

IIA agrees with the suggested additional content as long as the measurement metrics are clearly laid out and accessible. Please see our response to Q71.

Q90: Do you agree with the suggested additional requirements for significant benchmarks? Which of the three options proposed you prefer, and why?

Identifying that a benchmark has been categorized as significant is useful information to include in a benchmark statement. Option 1 is the better approach to avoid duplicative information and to ensure market participants and stakeholders are directed to the complete compliance statement to better understand the provisions not complied with and the rationale for them not applying.

Q91: Do you agree with the suggested additional requirements for non-significant benchmarks? If not, please explain why and indicate what alternative or additional information you consider appropriate in case a benchmark is non-significant.

IIA does not feel it is necessary to identify a benchmark as non-significant. Option 1 is the better approach to avoid duplicative information and to ensure stakeholders are directed to the complete compliance statement to better understand the provisions not complied with and the rationale for them not applying.

Q92: Are there any further contents for a benchmark statement that should apply to the various classes of benchmarks identified in this chapter?

Not at this time.

Q93: Do you agree with the approach outlined above regarding information of a general nature and financial information? Do you see any particular cases, such as certain types of providers, for which these requirements need to be adapted?

Disclosure of financial information is not justified by the Level 1 text, except for critical/systemically relevant benchmarks. Therefore a full disclosure of financial information does not seem proportionate.

Q94: Do you agree with ESMA’s approach to the above points? Do you believe that any specific cases exist, related either to the type of provider or the type of conflict of interest, that require specific information to be provided in addition to what initially identified by ESMA?

ESMA should be assessing the overall ability of the administrator and whether it is suitable for authorization. This seems overly prescriptive; for example, how does having the CVs of management help assess what skills are needed for a particular firm? Different expertise may be needed for different asset classes, and it is not clear how ESMA could use this information in a meaningful way.

Q95: Do you agree with the proposals outlined for the above points? Do you see any areas requiring particular attention or adaptation?

Authorization should be at the administrator level, not at the individual benchmark level. Forcing the NCA to assess the data input, for example, seems to miss the point. This disclosure is captured in other places, such as the methodology and/or compliance statements.

Q96: Can you suggest other specific situations for which it is important to identify the information elements to be provided in the authorisation application?

This seems very broad and an over-reach of what is needed per the Level 1 text.

Q97: Do you agree with the proposed approach towards registration? How should the information requirements for registration deviate from the requirements for authorisation?

TYPE YOUR TEXT HERE

Q98: Do you believe there are any specific types of supervised entities which would require special treatment within the registration regime? If yes, which ones and why?

If an entity provides the functions of a benchmark administrator, it should be regulated as a benchmark administrator even if it is a supervised entity in other aspects of its business. The fact that it may or may not be a supervised entity in other parts of its business does not eliminate the risks the BMR is trying to address.

Q99: Do you have any suggestions on which information should be included in the application for the recognition of a third country administrator?

IIA considers that the RTS should provide clarity, in line with the Level 1 text, as to the circumstances under which benchmarks based on exchange data from outside the EU would be considered regulated data benchmarks, enabling the administrator to subsequently benefit from the exemptions specified in Article 12a. To prevent market disruption and a decline in transparent investment vehicles allowing investors to benefit from the development of growth markets beyond the EU, data sourced from 3rd country trading venues subject to regulation equivalent to MiFID should be considered regulated data.

As to the form and content of the application for recognition, clear guidance is needed on issues which are critical for market access, such as the involvement of the “legal representative” in the oversight function, the content of audit reports, etc. It should be acknowledged that the involvement of the representative in the oversight function can be achieved by different means, i.e. that the level of involvement of the representative in the oversight function may depend on the circumstances of the case at hand.

In order to ensure legal certainty, clear guidance is needed as regards the information 3rd country administrators need to submit to their Member State of reference in order to claim the exemptions subsequent to classification in a specific benchmark category. In order to reflect established market practices and ensure the competitiveness of benchmark administrators in a global industry, cost-related reasons as well as joint ventures with strategic partners in markets outside the EU should be acknowledged as objective reasons for an endorsement.

Q100: Do you agree with the general approach proposed by ESMA for the presentation of the information required in Article 21a(6) of the BMR?

Disclosure of financial information is not justified by the Level 1 text, except for critical/systemically relevant benchmarks. Therefore a full disclosure of financial information for all benchmarks does not seem proportionate.

Q101: For each of the three above mentioned elements, please provide your views on what should be the measures to determine the conditions whether there is an ‘objective reason’ for the endorsement of a third country benchmark.

Of the elements put forward in paragraph 331, only (iii) seems relevant.

Q102: Do you consider that there are any other elements that could be taken into consideration to substantiate the ‘objective reason’ for the provision and endorsement for use in the Union of a third country benchmark or family of benchmarks?

Many benchmark administrators are global in nature because their clients and investors seek a common approach to benchmarks regardless of location. Administrators are likely to have multiple offices across the world performing many different functions. Many of the providers of EU benchmarks may be headquartered outside the EU. Regulation built on where a company is headquartered is an out-of-date concept.

Q103: Do you agree that in the situations identified above by ESMA the cessation or the changing of an existing benchmark to conform with the requirements of this Regulation could reasonably result in a force majeure event, frustrate or otherwise breach the terms of any financial contract or financial instrument which references a benchmark? If not, please explain the reasons why.

If a benchmark were to be changed due to the need to follow ESMA guidelines, and that change would result in a breach of financial contracts, the benchmark should be allowed to continue serving those financial contracts in its current form without interruption.

Q104: Which other circumstances could cause the consequences mentioned in Article 39(3) in case existing benchmarks are due to be adapted to the Regulation or to be ceased?

Particular concerns apply to open-ended funds that track a benchmark which would have to undergo substantial changes to follow ESMA guidelines; such changes could greatly alter the value of the financial contract for the benchmark user (the fund company) and the client (the investor) who based their investment decisions on it. As that situation is open-ended, it is not easily resolved within the ESMA guidelines and must be studied further to understand the investor impact.

Q105: Do you agree with the proposed definition of “force majeure event”? If not, please explain the reasons and propose an alternative.

TYPE YOUR TEXT HERE

Q106: Are the two envisaged options (with respect to the term until which a non-compliant benchmark may be used) adequate: i.e. either (i) fix a time limit until when a non-compliant benchmark may be used or (ii) fix a minimum threshold which will trigger the prohibition to further use a non-compliant benchmark in existing financial instruments/financial contracts?

Yes, the two options are adequate; however, further study is required into why a benchmark is not in compliance in order to determine the appropriate time limit and threshold.

Q107: Which thresholds would be appropriate to foresee and how might a time limit be fixed? Please detail the reasons behind any suggestion.

A fixed time limit to resolve a non-compliant benchmark for the financial instruments tracking it can be useful to encourage a change in benchmark policy or the creation of a new benchmark. However, the time limit must be set reasonably: the creation of a new compliant benchmark, and the testing needed to ensure its market representativeness, may take longer than a fixed period of 24 months. The time limit should ensure that its existence will not have an undue impact on the underlying market or investors. Thus further study with financial market participants is required to better gauge both the time limit and the minimum number of users.

Q108: Is the envisaged identification process of non-compliant benchmarks adequate? Do you have other suggestions?

No suggestions at this time.

Q109: Is the envisaged procedure enabling the competent authority to perform the assessment required by Article 39(3) correct in your view? Please advise what shall be considered in addition.

TYPE YOUR TEXT HERE

Q110: Which information it would be opportune to receive by benchmark providers on the one side and benchmark users that are supervised entities on the other side?

TYPE YOUR TEXT HERE

Q111: Do you agree that the different users of a benchmark that are supervised entities should liaise directly with the competent authority of the administrator and not with the respective competent authorities (if different)?

IIA does not feel that users of benchmarks should liaise directly with the competent authority of the administrator. Often, the administrator’s competent authority will be a different regulatory authority or agency and may not be the user’s own competent authority. What is most important is that all involved Competent Authorities have the same understanding and interpretation of the rules. We encourage ESMA to continue its work on ensuring harmonized implementation of EU legislation, including as regards the Benchmark Regulation.

Q112: Would it be possible for relevant benchmark providers/users that are supervised entities to provide to the competent authority an estimate of the number and value of financial instruments/contracts referencing to a non-compliant benchmark being affected by the cessation/adaptation of such benchmark?

It is possible for benchmark providers/users to provide the competent authority with an estimate of the number and value of financial instruments, provided that a reasonable time frame for delivery of the data is agreed and that all data are viewed as estimates only. Benchmark providers are generally only able to track benchmark usage via self-reporting by benchmark users. If users do not accurately self-report, the administrator will not be aware and thus cannot be held accountable for accurately tracking benchmark usage, especially in the determination of critical and significant benchmarks.

Q113: Would it be possible to evaluate how many out of these financial contracts or financial instruments are affected in a manner that the cessation/adaptation of the non-compliant benchmark would result in a force majeure event or frustration of contracts?

Listed and publicly available financial contracts could be tracked for each benchmark, and an estimate then created of the quantity affected by the cessation/adaptation of non-compliant benchmarks.